Article

Validity of the Worst Performance Rule as a Function of Task Complexity and Psychometric g: On the Crucial Role of g Saturation

by Thomas H. Rammsayer 1,2,* and Stefan J. Troche 3

1 Institute of Psychology, University of Bern, Fabrikstrasse 8, CH-3012 Bern, Switzerland
2 Center for Cognition, Learning, and Memory, University of Bern, CH-3012 Bern, Switzerland
3 Department of Psychology and Psychotherapy, University of Witten/Herdecke, D-58448 Witten, Germany
* Author to whom correspondence should be addressed.
Submission received: 26 November 2015 / Revised: 2 March 2016 / Accepted: 9 March 2016 / Published: 16 March 2016
(This article belongs to the Special Issue Mental Speed and Response Times in Cognitive Tests)

Abstract

Within the mental speed approach to intelligence, the worst performance rule (WPR) states that the slower trials of a reaction time (RT) task reveal more about intelligence than do faster trials. There is some evidence that the validity of the WPR may depend on high g saturation of both the RT task and the intelligence test applied. To directly assess the concomitant influence of task complexity, as an indicator of task-related g load, and g saturation of the psychometric measure of intelligence on the WPR, data from 245 younger adults were analyzed. To obtain a highly g-loaded measure of intelligence, psychometric g was derived from 12 intelligence scales. This g factor was contrasted with the mental ability scale that showed the smallest factor loading on g. For experimental manipulation of g saturation of the mental speed task, three versions of a Hick RT task with increasing levels of task complexity were applied. While there was no indication for a general WPR effect when a low g-saturated measure of intelligence was used, the WPR could be confirmed for the highly g-loaded measure of intelligence. In this latter condition, the correlation between worst performance and psychometric g was also significantly higher for the more complex 1-bit and 2-bit conditions than for the 0-bit condition of the Hick task. Our findings clearly indicate that the WPR depends primarily on the g factor and, thus, only holds for the highly g-loaded measure of psychometric intelligence.

1. Introduction

Over the last four decades, the mental speed approach to human intelligence has provided accumulating evidence for a positive relationship between an individual’s general intelligence, also referred to as psychometric g [1,2], and his/her speed of information processing as indexed by reaction time (RT) measures (for reviews see [3,4]). Within this conceptual framework, intra-individual variability in RT has also become of major interest, as it appeared that a person’s level of psychometric g is usually slightly more strongly related to the standard deviation of his/her RTs over n trials (RTSD) than to his/her mean RT (e.g., [5]). These findings indicate faster and less variable RTs for individuals with high than for individuals with low general intelligence. Moreover, an almost perfect positive correlation between mean RT and RTSD, in combination with the finding that both these variables are related to psychometric g, supports the notion of a common process, the g factor, exercising a controlling influence on RT, RTSD, and psychometric g [3,5].
Although the very nature of this fundamental process or its biological substrate is still unknown, there have been several attempts to conceptualize such a process. Biological accounts refer to the rate of neural oscillations [3,5,6,7], transmission errors [8,9,10], or temporal resolution power of the brain [11,12,13,14,15], while cognitive accounts emphasize lapses of attention (e.g., [16,17]), lapses in the chaining of working memory processes [18], or the speed at which information is accumulated [19]. According to these various accounts, prolonged RT, increased RTSD, and poor psychometric intelligence were assumed to be a consequence of a lower rate of neural oscillations, lower temporal resolution power or more transmission errors in the brain, a larger number of lapses of attention or in chaining of working memory processes, or slower speed of information accumulation, respectively.
As a general rule, RT distributions are always positively skewed and, at the same time, RTs generated by less intelligent people are more spread out than those of highly intelligent people. The reason underlying these empirical findings is twofold: First, there is a physiological limit to the speed of reaction at around 100 ms at the low end of the RT scale [20]. At the high end of the RT scale, on the other hand, there is no natural limit on the slowness of reaction. Second, in low-intelligent individuals, trials of a given RT task are more frequently and more strongly negatively affected by the assumed neurocognitive deficit than in high-intelligent individuals (cf. [6,16,21]). These two factors combined not only cause a right-skewed RT distribution but also contribute to the observation that differences in RT between low- and high-intelligent individuals become more evident for the slower than for the faster individual RTs. This phenomenon has been referred to as the Worst Performance Rule (WPR) by Larson and Alderton [18]. More precisely, the WPR states that “the worst RT trials reveal more about intelligence than do other portions of the RT distribution” [18] (p. 310).
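The twofold mechanism just described can be illustrated with a small simulation; this is a hedged sketch with invented group labels, sample sizes, and ex-Gaussian parameters, not an analysis from the original study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_trials = 100, 30

def simulate_rts(tau_ms):
    # Ex-Gaussian RTs: a ~300 ms Gaussian core plus an exponential tail.
    # A longer tail (larger tau) mimics more frequent/stronger "lapses".
    core = rng.normal(300, 20, size=(n_subjects, n_trials))
    tail = rng.exponential(tau_ms, size=(n_subjects, n_trials))
    return np.clip(core + tail, 100, None)   # ~100 ms physiological floor

fast_group = simulate_rts(tau_ms=40)   # hypothetical "high-ability" group
slow_group = simulate_rts(tau_ms=120)  # hypothetical "low-ability" group

for label, rts in [("high-ability", fast_group), ("low-ability", slow_group)]:
    ranked = np.sort(rts, axis=1)              # fastest ... slowest per person
    best, worst = ranked[:, :5].mean(), ranked[:, -5:].mean()
    print(f"{label:12s} best band ~ {best:6.1f} ms, worst band ~ {worst:6.1f} ms")
# The group difference is much larger in the worst band than in the best band,
# reproducing the pattern the WPR describes.
```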
The WPR proved to be consistent with older data sets on the RT–IQ relationship in low- and high-intelligent participants (e.g., [6,7,16]). Several subsequent studies, using quite different RT tasks, also provided empirical evidence for the validity of the WPR (e.g., [17,22,23,24,25]).
Despite these findings in favor of the WPR, there are also reports challenging the validity of the WPR. In a sample of adults ranging in age from 18 to 83 years, Salthouse [21] found a very similar pattern across a set of various RT tasks: fast and slow RTs correlated with intelligence to about the same extent and, thus, did not support the WPR. More recently, a study by Ratcliff, Thapar, and McKoon [26], comprising RTs for numerosity discrimination, recognition memory, and lexical decision, failed to support the WPR because its basic premise, namely that IQ should be more strongly correlated with slow than with fast responses, was not consistently met by the data. Finally, Madison, Forsman, Blom, Karabanov, and Ullén [27] showed that variability in isochronous interval production—a simple, automatic timing task—was negatively correlated with intelligence. There was, however, no indication that trials with high variability (i.e., worst performance trials) were better predictors of intelligence than the trials where the participant performed optimally (i.e., best performance trials). These latter findings suggest that the validity of the WPR is less universal than might be deduced from the rather large number of positive results.
Proceeding from the common assumption that g reflects some fundamental biological, yet still unknown, property of the brain that manifests itself to some degree in all mental activities, performance on various cognitive tasks can be understood to reflect g, though to different degrees [1]. Typically, g saturation of a given task is assumed to be positively related to task complexity. More precisely, within the framework of the mental speed approach, the task complexity hypothesis states that RTs obtained with more complex versions of an information processing task correlate more highly with psychometric g than do RTs obtained with less complex versions of the same task (e.g., [28,29,30,31]).
Based on an examination of “virtually the entire literature” (p. 183) on task complexity and the RT-IQ relationship, Jensen [3] arrived at the First Law of Individual Differences stating that “individual differences in learning and performance increase monotonically as a function of increasing task complexity” (p. 205). Within the framework of the WPR, this rule implies that enhancing task complexity should result in a much more pronounced increase of the slowest RTs than of the fastest RTs. As a further consequence, enhanced task complexity should also magnify the RT differences between groups that differ in general intelligence. In other words, differences in RT between low- and high-IQ individuals should become more pronounced with increasing RT task complexity as more complex RT tasks have higher g saturation than less complex RT tasks and, thus, account for a larger portion of variance in psychometric g [1,6].
Nevertheless, empirical studies directly comparing the influence of task complexity on the WPR are extremely scant. First evidence for an effect of task complexity was reported by Jensen [6]. A trial-by-trial comparison of 46 mildly retarded and 50 bright, normal young adults revealed larger RT differences between the two groups for the slowest than for the fastest trials on a simple RT task. These RT differences, particularly for the slowest RT trials, were substantially magnified when the participants performed a more complex eight-choice RT task. Kranzler [24] administered his subjects a simple RT task, an eight-choice RT task, and an odd-man-out RT task. Results indicated that the correlations between RT bands (rank-ordered from the trial with the fastest to the trial with the slowest RT for each individual) and psychometric g varied with task complexity. Although for the least complex simple RT task, no linear increase in correlations with g across RT bands could be observed, the correlation increased linearly from the fastest to the slowest RT bands for the two more complex tasks. Therefore, Kranzler concluded that the WPR only holds for relatively complex and, thus, highly g-loaded RT tasks.
More recently, Fernandez et al. [23] investigated the influence of task complexity on the WPR in children, young adults, and older adults. For this purpose, a simple RT task, a two-choice RT task, and a color-naming Stroop task were used to experimentally vary the level of task complexity. While in all age groups, and for all tasks, the WPR could be confirmed, an effect of task complexity on the WPR was limited to children and older adults. In both these latter groups, worst performance trials of the choice RT task explained more variance in intelligence than worst performance trials of the simple RT task. Similarly, worst performance trials of the incongruent condition of the Stroop task accounted for a larger portion of variance in intelligence than worst performance trials of the choice RT task and the congruent condition of the Stroop task. No such moderating influence of task complexity on the WPR could be established for young adults, possibly due to restricted variance of psychometric intelligence in this latter group of participants [23] (p. 38).
In addition, no effect of task complexity on the WPR was found in a study by Diascro and Brody [22]. These authors confirmed the validity of the WPR for RTs obtained from the detection of straight and slanted lines in the presence of slanted and straight distractor lines. With regard to task complexity, the most intriguing aspect of this study was that detection of a slanted line is based on parallel processing, whereas detection of straight lines requires serial processing [32]. Hence, detection of a straight line can be considered a more complex task than detection of a slanted line. This assumption was corroborated by faster RTs for the detection of slanted lines than for the detection of straight lines. This difference in task complexity, however, did not affect the RT–IQ correlation; correlations between IQ and worst RTs for the detection of both straight and slanted lines were virtually identical.
At least two reasons may account for these rather mixed and inconclusive results. First, the RT tasks for indexing mental speed differed considerably across studies. The only exceptions may be the simple and eight-choice RT tasks applied by Jensen [6] and Kranzler [24]. Second, in all four studies, different psychometric tests for the assessment of the individual levels of psychometric g were used. While Kranzler [24] derived individual g scores from the Multidimensional Aptitude Battery [33], Diascro and Brody [22] and Fernandez et al. [23] applied the Culture-Fair IQ Test Scale 3 [34] and the Raven Standard Progressive Matrices [35], respectively, as a measure of psychometric g. No detailed information on the psychometric assessment of g was provided by Jensen [6]. The different RT tasks as well as the various psychometric measures of g applied in these studies were highly likely to differ in g load. Thus, if a high g loading is essential for the WPR to become effective, differences in g saturation, in both the RT tasks and the obtained measures of psychometric g, may represent a decisive factor contributing to the inconsistent results.
Converging evidence for this conclusion can be derived from Larson and Alderton’s [18] study, where the WPR was found to hold particularly for presumably highly g-loaded psychometric measures of intelligence, such as a composite index of fluid and crystallized intelligence, rather than for an intelligence measure assumed to be low in g saturation, referred to as a clerical speed composite. Unfortunately, Larson and Alderton did not derive their measures of intelligence with different levels of g saturation from factor analysis of a correlation matrix. Instead, they obtained their composite measures of psychometric intelligence by combining standardized and averaged scores from different scales that may reflect g to some extent but also reflect first- and second-order factors and specificity [1,3]. Thus, the real differences in g saturation of their various composite measures remained rather unclear and arguable. To the best of our knowledge, there are no other studies that directly addressed the effect of differences in g saturation of the psychometric measure of intelligence on the validity of the WPR.
At this point, the available data suggest that the validity of the WPR may depend on high g saturation of both the cognitive (RT) tasks and the psychometric intelligence tests applied. The present study, therefore, was designed to directly assess the influence of task complexity and g saturation of the psychometric measure of intelligence on the WPR. For this purpose, two levels of g saturation of the intelligence measure and three levels of task complexity of the same type of RT task were utilized. To arrive at a highly g-loaded psychometric measure of intelligence, psychometric g was derived from 12 intelligence scales corresponding to Thurstone’s [36,37] primary mental abilities. This measure was contrasted with the mental ability with the least g saturation, i.e., the aspect of intelligence that showed the smallest factor loading on psychometric g. For experimental manipulation of task complexity, three different conditions of a Hick RT task were applied.
Thus, based on the above considerations, we aimed at evaluating the following predictions: (1) If the WPR is universally valid, the (negative) correlation between the slowest RTs (i.e., worst performance) and psychometric intelligence should be higher than the correlation between the fastest RTs (i.e., best performance) and psychometric intelligence irrespective of RT task complexity and g saturation of the psychometric measure of intelligence; (2) If, however, the validity of the WPR depends on high g saturation, then a stronger correlational relationship between worst RT performance and psychometric intelligence is expected with increasing complexity of the Hick RT task as well as with higher g saturation of the applied measure of intelligence.

2. Method

2.1. Participants

In order to achieve a sample size that provided reliable data for WPR analyses, we drew on a pooled sample reported by Helmbold, Troche, and Rammsayer [38]. This sample comprised 260 participants (130 male and 130 female). For our WPR analyses, we excluded those participants with invalid trials in one or more conditions of the Hick task. Incorrect responses and RTs shorter than 100 ms (see [20]) or longer than 1000 ms (see [39]) were considered invalid trials. This resulted in a final sample of 122 male and 123 female younger adults ranging in age from 18 to 39 years (mean ± standard deviation: 24.7 ± 5.57 years). Education levels spanned a broad range, including 85 university students, 74 vocational school pupils and apprentices, and 12 unemployed persons. The remaining 74 participants were employed in various professions. All participants reported normal hearing and normal or corrected-to-normal sight. Before being enrolled in the study, each participant was informed about the study protocol and gave his/her written informed consent.

2.2. Intelligence Tests

In order to cover a large range of different cognitive abilities and, thus, define a psychometric measure highly saturated in psychometric g, a comprehensive test battery was employed (cf. [1,40]). The battery included 12 intelligence scales assessing various aspects of intelligence corresponding to eight primary mental abilities suggested by Thurstone [36,37]. As a measure of reasoning abilities, the short version of the German adaptation of Cattell’s Culture Fair Intelligence Test, Scale 3 (CFT) [41] by Weiß [42] was employed. Verbal comprehension, word fluency, space, and flexibility of closure were assessed by subtests of the Leistungsprüfsystem (LPS) [43]. In addition, scales measuring numerical intelligence and verbal, numerical, and spatial memory, respectively, were taken from the Berlin Intelligence Structure Test (BIS) [44]. A brief description of the components of the entire battery is presented in Table 1.

2.3. Hick Reaction Time Task

As a measure of speed of information processing, a typical elementary cognitive task, the so-called Hick reaction time (RT) paradigm, was used. The Hick paradigm is a visual simple and choice RT task in which participants have to react as quickly as possible to an upcoming visual stimulus. This task is based on Hick’s [45] discovery of a linear relationship between an individual’s RT and the binary logarithm of the number of stimulus-response alternatives among which a decision has to be made. In the case of simple RT, no decision between response alternatives is involved (i.e., zero bits of information have to be processed; 0-bit condition). Analogously, deciding between two response alternatives (two-choice RT) requires one binary decision (1-bit condition), while, when four response alternatives are present (four-choice RT), two binary decisions are necessary (2-bit condition). The current version of the Hick paradigm was similar to the one proposed by Neubauer [46], who was concerned with creating a version of this paradigm that is free of potential confounds such as response strategies or changes in visual attention [46,47].
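For reference, Hick's law as invoked here can be written as a linear function of the information content of the stimulus-response set; this is a standard textbook formulation rather than an equation quoted from the present article.

```latex
% Hick's law: mean RT increases linearly with the number of bits H to be
% processed, where n is the number of equally likely stimulus-response
% alternatives (hence the 0-bit, 1-bit, and 2-bit conditions for n = 1, 2, 4).
\mathrm{RT} = a + b \cdot H, \qquad H = \log_2 n
```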

2.3.1. Apparatus and Stimuli

Stimuli were rectangles (2 cm × 1 cm) and plus signs (0.8 cm) presented on a monitor screen. For registration of the participant's responses, an external response panel with four buttons corresponding to the locations of the four rectangles presented under the 2-bit condition was connected to the computer. Responses were recorded with an accuracy of ±1 ms.

2.3.2. Procedure

In the 0-bit condition (no-choice or simple RT), one rectangle was presented in the center of the monitor screen. After a foreperiod varying randomly between 700 and 2000 ms, the imperative stimulus, a plus sign, was presented in the center of the rectangle. The rectangle and the plus sign remained on screen until the participant pressed the designated response button. The 1-bit condition (two-choice RT) was almost identical to the 0-bit condition, except that two rectangles were presented arranged in a row. After a variable foreperiod, the imperative stimulus was presented in one of the two rectangles. Presentation of the imperative stimulus was randomized and balanced. Thus, the imperative stimulus appeared in each of the two rectangles in 50% of the trials. Similarly, in the 2-bit condition (four-choice RT), four rectangles arranged in two rows were displayed on the monitor screen. Again, the imperative stimulus was presented randomly in one of the four rectangles after a variable foreperiod.
Participants were instructed to respond as quickly as possible to the imperative stimulus by pressing the response button corresponding to the rectangle in which the imperative stimulus appeared. Each correct response was immediately followed by a 200-ms tone and an intertrial interval of 1500 ms. To avoid order effects, the order of conditions was randomized across participants. Each condition consisted of 32 trials preceded by 10 practice trials.
As suggested by Larson and Alderton [18], for each participant and each condition, the fastest and the slowest trial were discarded from further analyses in order to avoid outliers. The remaining 30 trials per condition were ranked from the fastest to the slowest trial. Then, the ranked RTs were divided into six consecutive RT bands with five RTs per band. As dependent variables, mean RT was computed for each band.
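The banding procedure can be expressed compactly; the following is a minimal sketch under the assumptions stated above (32 trials per condition, removal of the single fastest and slowest trial, six bands of five trials), with function and variable names chosen for illustration only.

```python
import numpy as np

def rt_bands(rts, n_bands=6):
    """Mean RT per band, following the Larson-and-Alderton-style procedure
    described above: drop the single fastest and slowest trial, rank the
    remaining trials, and split them into equally sized consecutive bands."""
    rts = np.sort(np.asarray(rts, dtype=float))
    trimmed = rts[1:-1]                       # discard fastest and slowest trial
    bands = np.array_split(trimmed, n_bands)  # 30 trials -> 6 bands of 5
    return np.array([band.mean() for band in bands])

# Hypothetical usage: `condition_rts` holds the 32 valid RTs of one participant
# in one Hick condition (invalid trials already excluded).
condition_rts = np.random.default_rng(0).normal(400, 60, 32)
print(rt_bands(condition_rts))   # Band 1 (fastest) ... Band 6 (slowest)
```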

3. Results

Mean and standard deviation of unstandardized scores on the twelve intelligence scales are reported in Table 2. The full correlation matrix for the intelligence battery can be downloaded from “Supplementary Files”. In order to obtain an estimate of psychometric g, all psychometric test scores were subjected to a principal components analysis (PCA). Based on a scree test [48,49], PCA yielded only one strong component with an eigenvalue of 4.21 that accounted for more than 35% of total variance. This first unrotated component is commonly considered an estimate of psychometric g [1]. As can be seen from Table 2, all mental tests had substantial positive loadings greater than 0.30 on this component. Apart from the three memory scales, all loadings were greater than 0.59.
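As a rough illustration of how a g estimate of this kind can be obtained from standardized scale scores, a minimal sketch is given below; the random data, variable names, and the use of scikit-learn are assumptions for illustration, not a description of the original analysis software.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical (n_participants x 12) matrix of raw scale scores.
scores = np.random.default_rng(2).normal(size=(245, 12))

z = StandardScaler().fit_transform(scores)            # analyze the correlation matrix
pca = PCA(n_components=1).fit(z)

eigenvalue = pca.explained_variance_[0]               # first eigenvalue (4.21 in the article)
pct_variance = pca.explained_variance_ratio_[0]       # share of total variance (> 35% in the article)
loadings = pca.components_[0] * np.sqrt(eigenvalue)   # loading of each scale on the component
g_scores = pca.transform(z)[:, 0]                     # individual scores on the g estimate
# Note: the sign of a principal component is arbitrary; flip it if the
# loadings of the ability scales come out negative.
```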
Mean and standard deviation of RTs within and across the six RT bands of the three conditions of the Hick RT task are presented in Table 3. The full correlation matrix for all RT measures can be downloaded from “Supplementary Files”. As indicated by a one-way analysis of variance with task conditions as three levels of a repeated-measures factor, mean RT across all six bands increased significantly from the 0-bit to the 2-bit condition, F(2, 488) = 1532.45; p < 0.001; ηp2 = 0.86. All pairwise comparisons were statistically significant (all p < 0.001 after Bonferroni adjustment) confirming that the complexity of the Hick RT task increased monotonically from the 0-bit to the 2-bit condition. Furthermore, the polynomial linear contrast yielded statistical significance, F(1,244) = 2110.09; p < 0.001, corroborating the linear increase of RT from the 0-bit to the 2-bit condition as postulated by Hick’s law [45].
Subsequently, for each condition of the Hick RT task, a one-way analysis of variance with the RT bands as six levels of a repeated-measures factor was computed. In all three RT task conditions, there was a statistically significant main effect of band number; 0-bit condition: F(5, 1220) = 1072.37; p < 0.001; ηp2 = 0.82; 1-bit condition: F(5, 1220) = 1668.49; p < 0.001; ηp2 = 0.87; 2-bit condition: F(5, 1220) = 2078.91; p < 0.001; ηp2 = 0.90. Additional pairwise comparisons revealed that in all three conditions, mean RTs increased significantly with increasing band number (all p < 0.001 after Bonferroni adjustment). The polynomial linear contrasts were significant in all three conditions; 0-bit condition: F(1, 244) = 1290.83; p < 0.001; ηp2 = 0.84; 1-bit condition: F(1, 244) = 2035.73; p < 0.001; ηp2 = 0.89; 2-bit condition: F(1, 244) = 2564.45; p < 0.001; ηp2 = 0.91.
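A repeated-measures ANOVA of this kind can be reproduced, for example, with statsmodels; the long-format data frame below is hypothetical, and partial eta squared is computed from the F statistic and its degrees of freedom because AnovaRM does not report it directly. This is a sketch of the analysis type, not the original script.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one mean RT per participant and Hick condition.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "subject":   np.repeat(np.arange(245), 3),
    "condition": np.tile(["0-bit", "1-bit", "2-bit"], 245),
    "mean_rt":   rng.normal([330, 390, 450], 40, size=(245, 3)).ravel(),
})

res = AnovaRM(df, depvar="mean_rt", subject="subject", within=["condition"]).fit()
f_val, df1, df2 = res.anova_table.loc["condition", ["F Value", "Num DF", "Den DF"]]
partial_eta_sq = (f_val * df1) / (f_val * df1 + df2)  # eta_p^2 from F and its dfs
print(res)
print(f"partial eta squared = {partial_eta_sq:.2f}")
```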
For the assessment of the relationship between RT measures and psychometric g, a correlational approach was applied. In a first step, Pearson correlations were computed between mean RT of each band and the first unrotated principal component derived from the twelve intelligence scales as the most comprehensive measure of psychometric g (see Table 4). As can be seen from the filled circles in Figure 1, the (negative) correlation coefficients monotonically increased with the rank of the band in all three Hick RT task conditions.
In order to investigate whether the correlation between the worst performance (RT Band 6) and psychometric g was indeed significantly higher than the correlation between the best performance (RT Band 1) and psychometric g, we compared these two correlations for each Hick RT task condition as suggested by Steiger [50] using the statistical software provided by Lee and Preacher [51]. To avoid alpha inflation, level of statistical significance was adjusted to p = 0.017.
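The test described here compares two dependent correlations that share one variable; the sketch below reimplements the Steiger (1980) Z-statistic that tools such as the Lee and Preacher calculator provide, with made-up correlation values for illustration only.

```python
import numpy as np
from scipy.stats import norm

def steiger_z(r_g_worst, r_g_best, r_worst_best, n):
    """Steiger's (1980) Z for two dependent correlations that share one
    variable (here: g with worst RT vs. g with best RT)."""
    fisher = np.arctanh                        # Fisher r-to-z transform
    r_bar = (r_g_worst + r_g_best) / 2
    # Correlation between the two correlation estimates (pooled-r version).
    s = (r_worst_best * (1 - 2 * r_bar**2)
         - 0.5 * r_bar**2 * (1 - 2 * r_bar**2 - r_worst_best**2)) / (1 - r_bar**2)**2
    z = (fisher(r_g_worst) - fisher(r_g_best)) * np.sqrt((n - 3) / (2 - 2 * s))
    return z, 2 * norm.sf(abs(z))              # two-tailed p value

# Hypothetical example with invented correlations:
z, p = steiger_z(r_g_worst=-0.35, r_g_best=-0.15, r_worst_best=0.60, n=245)
print(f"z = {z:.2f}, p = {p:.4f}")
```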
In all three task conditions, the correlation between psychometric g and the worst performance was significantly higher than between psychometric g and the best performance (0-bit condition: z = 3.54; p < 0.001; 1-bit condition: z = 4.58; p < 0.001; 2-bit condition: z = 3.00; p < 0.01). Furthermore, the correlation between psychometric g and worst performance significantly increased from the 0-bit condition to the 1-bit (z = 2.63; p < 0.01) and to the 2-bit condition (z = 2.17; p < 0.017) but not from the 1-bit to the 2-bit condition (z = −0.23; p = 0.82). The correlation between psychometric g and the best performance increased from the 0-bit to the 1-bit (z = 2.32; p < 0.017) and to the 2-bit condition (z = 3.76; p < 0.001) but not from the 1-bit to the 2-bit condition after Bonferroni correction (z = 1.99; p = 0.02).
To compare this pattern of results with the corresponding pattern for a low g-saturated measure of intelligence, we extracted the first unrotated principal component from the three memory tests, which had the lowest loadings on the g factor (see Table 2). The reason for building a composite score instead of taking the test with the lowest factor loading was to increase the reliability of the low g-saturated measure, which should be higher for the composite of the three tests than for each test alone. To make sure that the principal component extracted from the three memory tests still had a low g saturation, a further PCA was computed that was identical to the initial one, except that the scores of the three memory tests (BIS OG, BIS ZZ, and BIS WM) were replaced by the factor scores of the memory composite. This composite score loaded 0.45 on the g factor, while the next-lowest loading among the remaining scales was 0.60 for LPS 7 (Space 1). Thus, it can be safely assumed that the g saturation of the three memory tests was still considerably lower than that of all the other intelligence scales. Given a g loading of 0.45, the g factor and the memory composite score shared only 20.3% of common variance.
In a next step, the composite score of the three memory tests was correlated with the mean RTs of the six RT bands within each condition of the Hick RT task. The resulting correlation coefficients are given in Table 4. As can be seen in Figure 1, the correlation between RT Band 6 (worst performance) and the memory composite was significantly lower than the correlation between the same band and psychometric g in all three task conditions (0-bit condition: z = 2.16; p < 0.017; 1-bit condition: z = 3.25; p < 0.01; 2-bit condition: z = 4.00; p < 0.001). On the other hand, the correlation between the best performance (RT Band 1) and psychometric g did not differ from the correlation between the best performance and the memory composite for the 0-bit (z = 0.24; p = 0.81) and the 1-bit condition (z = 1.61; p = 0.11). In the most complex condition, though, best performance was more strongly correlated with psychometric g than with the memory composite (z = 2.74; p < 0.01). Only in the 1-bit condition was a strong monotonic increase of the correlation between RT and the memory composite observed from RT Band 1 to Band 6, resulting in a higher correlation of the memory composite with the worst than with the best performance (z = 2.58; p < 0.01). For the 0-bit condition (z = 1.63; p = 0.10) as well as for the 2-bit condition (z = 1.17; p = 0.24), the respective correlation coefficients did not differ significantly from each other.
Most importantly, however, task complexity had no influence on the correlation between the memory composite and the worst performance (RT Band 6). No statistically significant difference was obtained between the 0-bit and the 1-bit condition (z = 1.38; p = 0.17), between the 0-bit and the 2-bit condition (z = 0.28; p = 0.78), or between the 1-bit and the 2-bit condition (z = −1.14; p = 0.26).
To further address the question of whether the correlational relationship between worst performance trials and the two measures of intelligence increases as a function of task complexity, stepwise multiple regression analyses were performed for the prediction of psychometric g and the memory composite by successively entering worst performance RTs obtained in the 0-, 1-, and 2-bit condition, respectively (see Table 5). These analyses showed that worst performance (RT Band 6) in the 0-bit condition accounted for 5.3% of total variance of psychometric g (R2 in Table 5). When combining worst performance of the 0- and 1-bit condition, 13.4% of total variance in psychometric g could be explained.
This combined effect yielded a statistically significant increase of 8.1% (ΔR2) in explained variance as compared to the portion of 5.3% accounted for by the 0-bit condition alone. Adding the 2-bit condition to the latter two predictor variables resulted in an additional reliable increase in explained variance of 2.3%. Thus, all three levels of task complexity combined accounted for 15.7% of overall variability in psychometric g. In a final step, the unique contributions of the worst performance of the three RT task conditions to the explanation of the variance of psychometric g were computed. While the unique contribution of worst performance to the prediction of psychometric g in the 0-bit condition was only 0.1%, there were statistically significant unique contributions of 3.3% (p < 0.01) and 2.6% (p < 0.05) for the 1- and 2-bit conditions, respectively.
Unlike in the case of psychometric g, only the worst performance of the 0-bit and the 1-bit conditions combined accounted for a statistically significant, although rather small, portion (2.9%) of overall variability in the memory composite score.
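A stepwise (hierarchical) regression of this kind amounts to comparing R2 across nested models; the sketch below illustrates the ΔR2 logic with ordinary least squares on made-up arrays (variable names and effect sizes are invented) and is not the original analysis script.

```python
import numpy as np

def r_squared(y, X):
    """R^2 of an OLS regression of y on X (intercept added automatically)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hypothetical standardized data: g scores and worst-performance (Band 6) scores.
rng = np.random.default_rng(4)
g = rng.normal(size=245)
worst_0bit = -0.23 * g + rng.normal(size=245)
worst_1bit = -0.35 * g + rng.normal(size=245)
worst_2bit = -0.33 * g + rng.normal(size=245)

r2_step1 = r_squared(g, np.column_stack([worst_0bit]))
r2_step2 = r_squared(g, np.column_stack([worst_0bit, worst_1bit]))
r2_step3 = r_squared(g, np.column_stack([worst_0bit, worst_1bit, worst_2bit]))
print(f"R2 per step: {r2_step1:.3f}, {r2_step2:.3f}, {r2_step3:.3f}")
print(f"Delta R2 (adding 1-bit): {r2_step2 - r2_step1:.3f}")
print(f"Delta R2 (adding 2-bit): {r2_step3 - r2_step2:.3f}")
```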

4. Discussion

Proceeding from the mental speed approach to intelligence, the present study was designed to systematically assess the influence of g saturation on the validity of the WPR. For this purpose, g saturation of both the speed-of-information-processing task and the measure of psychometric intelligence was experimentally varied. As g saturation of a given RT task is assumed to be positively related to task complexity (e.g., [28,29,30,31]), a Hick RT task with three levels of task complexity was employed in the present study. In order to obtain a highly g-loaded psychometric measure of intelligence, a g factor was derived from 12 intelligence scales. This g factor was contrasted with a memory composite score that showed the smallest factor loading on g and shared only 20.3% of variance with the g factor.
As predicted by the WPR, the (negative) correlation between worst performance and psychometric g was significantly higher than the correlation between the best performance and psychometric g for all levels of task complexity when the highly g-loaded measure of psychometric intelligence was used. Furthermore, and also consistent with the WPR, there was a monotonic increase of the correlations between RT and psychometric g from the fastest to the slowest RT band for all levels of task complexity. In addition, the correlation between worst performance and psychometric g was significantly higher for the more complex 1-bit and 2-bit conditions than for the 0-bit condition of the Hick RT task.
Unlike for psychometric g, there was no indication of a general WPR effect when the low g-saturated measure of intelligence was applied. Except for the 1-bit condition, no significant monotonic increase of the correlations between RT and the memory composite score from the fastest to the slowest RT band could be observed. Only in the 1-bit condition did the correlation between worst performance and the memory composite score reach statistical significance and differ significantly from the correlation between the best performance and the memory composite score. Thus, task complexity had no systematic influence on the correlation between worst performance and psychometric intelligence in the case of a low g-saturated measure of intelligence.
When comparing the relationship between worst performance and intelligence across the two levels of psychometric g saturation, it became evident that, in all three RT task conditions, the correlation between worst performance and the memory composite was significantly lower than the correlation between worst performance and psychometric g. On the other hand, the correlation between best performance and psychometric g did not differ from the correlation between best performance and the memory composite score for the 0-bit and the 1-bit condition. In the most complex condition, however, best performance was more strongly correlated with psychometric g than with the memory composite.
To further evaluate the predictive power of worst performance trials as a function of g saturation of the psychometric measure of intelligence, multiple regression analyses were performed. These analyses also clearly confirmed the crucial role of a highly g-saturated measure of intelligence for the validity of the WPR. When using all three levels of task complexity as predictor variables, worst performance trials explained 15.7% of overall variability in intelligence indexed by the g factor, but accounted for only 3.0% of variance when the low g-saturated memory composite score was used.
Overall, this pattern of results indicates that for the WPR to become effective, a highly g-saturated measure of psychometric intelligence is a necessary condition. The only previous study that also directly investigated the effect of g-saturation of the psychometric measure of intelligence on WPR was performed by Larson and Alderton [18]. These authors also arrived at the conclusion that the validity of the WPR seems to depend on the level of g-saturation of the intelligence measure applied. It should be noted, however, that Larson and Alderton did not extract a g factor but compared a composite index of fluid and crystallized intelligence, a working memory composite score, and performance on a clerical speed test that were subjectively rated as high (index of fluid and crystallized intelligence and working memory composite) or low (clerical speed test) g-saturated measures of intelligence.
Additional converging evidence for the notion that the WPR only applies to highly g-saturated measures of psychometric intelligence can be derived from the fact that almost all studies confirming the WPR used rather highly g-loaded measures of intelligence. For example, Baumeister and Kellas [16] used mean full-scale IQs obtained by the Wechsler Adult Intelligence Scale [52] and the Wechsler Intelligence Scale for Children [53], Kranzler [24] used the Multidimensional Aptitude Battery [33], Diascro and Brody [22] used the Culture-Fair IQ Test Scale 3 [34], while Fernandez et al. [23] and Unsworth et al. [17] used Raven’s Progressive Matrices [35].
While a highly g-saturated measure of psychometric intelligence appears to be a conditio sine qua non for the validity of the WPR, the effect of g saturation of the RT task, as indexed by task complexity, provided a less conclusive pattern of results. Jensen’s [3] First Law of Individual Differences implies a much more pronounced increase of the slowest RTs than of the fastest RTs with increasing task complexity. In the present study, however, the observed increase from the 0-bit to the 1-bit condition for the fastest and slowest RTs was virtually identical. Only the transition from the 1-bit to the 2-bit condition showed the predicted much more pronounced increase in RT for the slowest compared to the fastest RT band. Consistent with the prediction derived from the WPR, the (negative) correlation between psychometric g and the worst performance was significantly higher than between psychometric g and the best performance for each level of task complexity. At the same time, the correlation between psychometric g and both best as well as worst performance significantly increased from the 0-bit to the 1-bit condition but remained practically unchanged from the 1-bit to the 2-bit condition. This represents a rather unexpected finding in light of the WPR, which suggests a more pronounced correlational relationship between psychometric g and worst performance with increasing task complexity.
As a possible explanation for this breakdown of the WPR for rather complex RT tasks, Jensen [39] introduced the idea of a U-shaped relation between the RT-g correlation and the level of task complexity (see also [3,54]). According to this account, beyond some optimal level, any further increase in task complexity will induce the use of additional auxiliary cognitive strategies. Furthermore, when task complexity exceeds a certain level, response errors are likely to occur (e.g., [3,55]). Both these factors may hamper a further increase of the correlation between slowest RT and psychometric g from the 1-bit to the 2-bit condition.
Supporting evidence for this notion could be derived from some studies that failed to confirm the validity of the WPR for more complex tasks. For example, Salthouse [21] investigated a sample of adults ranging in age from 18 through 83 years with a set of rather complex RT tasks, such as digit-digit and digit-symbol RT tasks. Fast and slow RTs correlated with intelligence to about the same extent and, thus, did not support the WPR. More recently, Fernandez et al. [23] investigated the influence of task complexity on the WPR in children, young adults, and older adults by means of a simple RT task, a two-choice RT task, and a color-naming Stroop task. While for all three age groups and for all tasks, the WPR could be confirmed, no general effect of task complexity on WPR could be revealed. In fact, an effect of task complexity was shown for children and older adults but not for young adults.
To gain some deeper insight and a better understanding of the influence of task complexity on the validity of the WPR, the results of our stepwise multiple regression analysis for the prediction of psychometric g may be helpful. The worst performance trials of the least complex Hick RT task (0-bit condition) accounted for a portion of 5.3% of variance in psychometric g. Adding worst performance of the more complex 1-bit condition as a second predictor variable resulted in an additional substantial gain in predictive power of 8.1%. Comparing this substantial gain to the relatively moderate increase in predictive power of only 2.3% obtained when adding worst performance on the most complex 2-bit condition as a third predictor variable suggested that the relative contribution of task complexity to the explanation of variance in psychometric g cannot be considered a simple linear function.
The comparatively large increase in explained variance by entering the 1-bit condition as a second predictor variable in addition to the 0-bit condition into the regression model indicates that the 1-bit condition and psychometric g share common processes not inherent in the less complex 0-bit condition. Compared to the gain in explained variance by adding the 1-bit condition, the contribution of the more complex 2-bit condition as a third predictor variable to account for variability in psychometric g was substantially smaller. These different gains in predictive power obtained by stepwise multiple regression analysis point to the particular importance of the transition from the simple-RT version to the two-response alternative version of the Hick RT task. More precisely, it was this transition from the 0-bit to the 1-bit condition of the Hick RT task where the influence of increasing task complexity became most clearly evident. This means that even a rather moderate increase in task complexity from a simple to a two-choice RT task caused a marked increase in g saturation, with the result that a much larger portion of the still unknown brain processes underlying mental speed and psychometric g was captured by the Hick task.
Another highly intriguing finding, also related to Hick RT task complexity, arose when considering the relationship between total and unique variance explained by each task condition. In the least complex 0-bit condition, worst performance RT accounted for a portion of 5.3% of total variance in psychometric g. Only 2.6% of this portion were uniquely explained by worst performance in simple RT. In contrast, the corresponding portions of unique variance amounted to 24.5% and 20.6% for the 1-bit and 2-bit conditions, respectively. This pattern of results clearly indicates that virtually all processes shared by the simple RT task and psychometric g are also covered by the more complex RT tasks with two (1-bit condition) and four (2-bit condition) response alternatives. In addition, however, each of the two more complex RT tasks also shared more than 20% of unique variance with psychometric g.
This outcome is consistent with the idea of a two-process model of mental speed put forward by Schweizer [56]. In his approach, Schweizer proposed that measures of speed of information processing are composed of both rather basic, sensory-perceptual aspects of speed (such as speed of signal detection) and attention-paced aspects. While the basic aspects are considered to be independent of the level of mental activity required to perform the cognitive task, the attention-paced aspects are assumed to vary as a function of the task demands on attentional resources. Both aspects of speed are related to psychometric intelligence, but the basic aspects only weakly so compared to the attention-paced aspects [56]. This notion may provide a tentative theoretical framework to account for our results. Simple RT in the 0-bit condition of the Hick task may be mainly controlled by sensory-perceptual aspects of speed but involves only a low level of attention-paced aspects of speed of information processing. Thus, RT in the 0-bit condition was related to psychometric intelligence primarily due to the basic, sensory-perceptual aspects of mental speed. The same sensory-perceptual aspects also become effective in the 1-bit and 2-bit conditions of the Hick RT task. Therefore, there was no significant portion of variance in psychometric intelligence uniquely explained by the 0-bit condition. Most importantly, however, the increasing complexity of the Hick RT task increased the attentional demands, so that more unique variance in psychometric intelligence was explained by the more complex task conditions.
Although the biological or even psychological basis of the g factor has not been identified yet [57,58], the g factor derived from psychometric tests of intelligence can be considered the outcome of a physical brain feature which enhances neural network efficiency (e.g., [3,59,60]). Against this background, the present finding that the predictive power of the WPR increases with increasing g saturation of the psychometric measures of intelligence applied is consistent with the observed relevance of the g loading for connecting cognitive performance differences and biological data (e.g., [61,62,63]).
In the present study, we applied a traditional approach based on an RT-binning procedure, as proposed by Larson and Alderton [18], to investigate the WPR. This procedure enabled us to easily implement various levels of task complexity and, at the same time, to keep the number of trials rather small. It should be noted, though, that more sophisticated mathematical models describing RT distributions comprehensively, as well as multidimensional measurement models, provide feasible tools to better control for measurement error and to more systematically connect characteristics of RT distributions to theoretical models. In particular, ex-Gaussian distributions (e.g., [25]), diffusion model approaches (e.g., [19,25,64]), and latent growth curve analysis (e.g., [65]) open up promising avenues for future research on the WPR.
Taken together, the findings of the present study provided first direct evidence that the validity of the WPR depends on the level of g saturation of the psychometric measure of intelligence applied. While there was no indication of a general WPR effect when a low g-saturated measure of intelligence was used, the WPR could be confirmed for the highly g-loaded measure of psychometric intelligence. This outcome clearly supports Jensen’s [3] notion that the “WPR phenomenon depends mainly on the g factor rather than on a mixture of abilities including their non-g components” (p. 180). Likewise consistent with the WPR, the correlation between worst performance and psychometric g was significantly higher for the more complex 1-bit and 2-bit conditions than for the 0-bit condition of the Hick RT task. As more complex RT tasks have higher g saturation than less complex versions of the same task and, thus, account for a larger portion of variance in psychometric g, this finding also endorsed the crucial role of g saturation for the validity of the WPR in particular and for the mental speed approach to intelligence in general.

Supplementary Materials

The following are available online at https://www.mdpi.com/2079-3200/4/1/5/s1, Table S1, Correlations intelligence tests; and Table S2, Correlations RT measures.

Author Contributions

Thomas H. Rammsayer conceived and designed the experiments; Thomas H. Rammsayer performed the experiments; Thomas H. Rammsayer and Stefan J. Troche analyzed the data; Stefan J. Troche contributed analysis tools; and Thomas H. Rammsayer and Stefan J. Troche wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jensen, A.R. The g Factor; Praeger Publishers: Westport, CT, USA, 1998. [Google Scholar]
  2. Spearman, C. General intelligence, objectively determined and measured. Am. J. Psychol. 1904, 15, 201–293. [Google Scholar] [CrossRef]
  3. Jensen, A.R. Clocking the Mind: Mental Chronometry and Individual Differences; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  4. Sheppard, L.D.; Vernon, P.A. Intelligence and speed of information-processing: A review of 50 years of research. Personal. Individ. Differ. 2008, 44, 535–551. [Google Scholar] [CrossRef]
  5. Jensen, A.R. The importance of intraindividual variation in reaction time. Personal. Individ. Differ. 1992, 13, 869–881. [Google Scholar] [CrossRef]
  6. Jensen, A.R. Reaction time and psychometric g. In A Model for Intelligence; Eysenck, H.J., Ed.; Springer: Berlin, Germany, 1982; pp. 93–132. [Google Scholar]
  7. Jensen, A.R. Individual differences in the hick paradigm. In Speed of Information-Processing and Intelligence; Vernon, P.A., Ed.; Ablex: Norwood, NJ, USA, 1987; pp. 101–175. [Google Scholar]
  8. Coyle, T.R. IQ is related to the worst performance rule in a memory task involving children. Intelligence 2001, 29, 117–129. [Google Scholar] [CrossRef]
  9. Eysenck, H.J. Introduction. In A Model for Intelligence; Eysenck, H.J., Ed.; Springer: Berlin, Germany, 1982; pp. 1–10. [Google Scholar]
  10. Eysenck, H.J. Intelligence and reaction time: The contribution of Arthur Jensen. In Arthur Jensen: Consensus and Controversy; Modgil, S., Modgil, C., Eds.; Falmer Press: New York, NY, USA, 1987; pp. 337–349. [Google Scholar]
  11. Lorås, H.; Stensdotter, A.K.; Öhberg, F.; Sigmundsson, H. Individual differences in motor timing and its relation to cognitive and fine motor skills. PLoS ONE 2013, 8, e69353. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Rammsayer, T.H.; Brandler, S. On the relationship between general fluid intelligence and psychophysical indicators of temporal resolution in the brain. J. Res. Personal. 2002, 36, 507–530. [Google Scholar] [CrossRef]
  13. Rammsayer, T.H.; Brandler, S. Performance on temporal information processing as an index of general intelligence. Intelligence 2007, 35, 123–139. [Google Scholar] [CrossRef]
  14. Ullén, F.; Forsman, L.; Blom, Ö.; Karabanov, A.; Madison, G. Intelligence and variability in a simple timing task share neural substrates in the prefrontal white matter. J. Neurosci. 2008, 28, 4238–4243. [Google Scholar] [CrossRef] [PubMed]
  15. Ullén, F.; Söderlund, T.; Kääriä, L.; Madison, G. Bottom–up mechanisms are involved in the relation between accuracy in timing tasks and intelligence—Further evidence using manipulations of state motivation. Intelligence 2012, 40, 100–106. [Google Scholar] [CrossRef]
  16. Baumeister, A.A.; Kellas, G. Distributions of reaction times of retardates and normals. Am. J. Ment. Defic. 1968, 72, 715–718. [Google Scholar] [PubMed]
  17. Unsworth, N.; Redick, T.S.; Lakey, C.E.; Young, D.L. Lapses in sustained attention and their relation to executive control and fluid abilities: An individual differences investigation. Intelligence 2010, 38, 111–122. [Google Scholar] [CrossRef]
  18. Larson, G.E.; Alderton, D.L. Reaction time variability and intelligence: A “worst performance” analysis of individual differences. Intelligence 1990, 14, 309–325. [Google Scholar] [CrossRef]
  19. Van Ravenzwaaij, D.; Brown, S.; Wagenmakers, E.J. An integrated perspective on the relation between response speed and intelligence. Cognition 2011, 119, 381–393. [Google Scholar] [CrossRef] [PubMed]
  20. Luce, D. Response Times: Their Role in Inferring Elementary Mental Organization; Oxford University Press: New York, NY, USA, 1986. [Google Scholar]
  21. Salthouse, T.A. Relations of successive percentiles of reaction time distributions to cognitive variables and adult age. Intelligence 1998, 26, 153–166. [Google Scholar] [CrossRef]
  22. Diascro, M.N.; Brody, N. Serial versus parallel processing in visual search tasks and IQ. Personal. Individ. Differ. 1993, 14, 243–245. [Google Scholar] [CrossRef]
  23. Fernandez, S.; Fagot, D.; Dirk, J.; de Ribaupierre, A. Generalization of the worst performance rule across the lifespan. Intelligence 2014, 42, 31–43. [Google Scholar] [CrossRef]
  24. Kranzler, J.H. A test of Larson and Alderton’s (1990) worst performance rule of reaction time variability. Personal. Individ. Differ. 1992, 13, 255–261. [Google Scholar] [CrossRef]
  25. Schmiedek, F.; Oberauer, K.; Wilhelm, O.; Süß, H.M.; Wittmann, W.W. Individual differences in components of reaction time distributions and their relations to working memory and intelligence. J. Exp. Psychol. Gen. 2007, 136, 414–429. [Google Scholar] [CrossRef] [PubMed]
  26. Ratcliff, R.; Thapar, A.; McKoon, G. Individual differences, aging, and IQ in two-choice tasks. Cogn. Psychol. 2010, 60, 127–157. [Google Scholar] [CrossRef] [PubMed]
  27. Madison, G.; Forsman, L.; Blom, Ö.; Karbanov, A.; Ullén, F. Correlations between intelligence and components of serial timing variability. Intelligence 2009, 37, 68–75. [Google Scholar] [CrossRef]
  28. Kranzler, J.H.; Whang, P.A.; Jensen, A.R. Task complexity and the speed and efficiency of elemental information processing: Another look at the nature of intellectual giftedness. Contemp. Educ. Psychol. 1994, 19, 447–459. [Google Scholar] [CrossRef]
  29. Larson, G.E.; Merritt, C.R.; Williams, S.E. Information processing and intelligence: Some implications of task complexity. Intelligence 1988, 12, 131–147. [Google Scholar] [CrossRef]
  30. Stankov, L. Complexity, metacognition, and fluid intelligence. Intelligence 2000, 28, 121–143. [Google Scholar] [CrossRef]
  31. Vernon, P.A. Intelligence and neural efficiency. In Current Topics in Human Intelligence: Individual Differences and Cognition; Detterman, D.K., Ed.; Ablex: Norwood, NJ, USA, 1993; Volume 3, pp. 171–187. [Google Scholar]
  32. Treisman, A.M.; Gormican, S. Feature analysis in early vision: Evidence from search asymmetries. Psychol. Rev. 1988, 95, 15–48. [Google Scholar] [CrossRef] [PubMed]
  33. Jackson, D.N. Multidimensional Aptitude Battery Manual; Research Psychologist Press: Port Huron, MI, USA, 1984. [Google Scholar]
  34. Cattell, R.B. Culture Fair Intelligence Test Manual; Institute for Personality and Ability Testing: Champaign, IL, USA, 1973. [Google Scholar]
  35. Raven, J.C.; Court, J.H.; Raven, J. Progressive Matrices Standards (PM38); EAP: Paris, France, 1998. [Google Scholar]
  36. Thurstone, L.L. Primary Mental Abilities; University of Chicago Press: Chicago, IL, USA, 1938. [Google Scholar]
  37. Thurstone, L.L. A Factorial Study of Perception; University of Chicago Press: Chicago, IL, USA, 1944; Volume 38. [Google Scholar]
  38. Helmbold, N.; Troche, S.; Rammsayer, T. Processing of temporal and non-temporal information as predictors of psychometric intelligence: A structural-equation-modeling approach. J. Personal. 2007, 75, 985–1006. [Google Scholar] [CrossRef] [PubMed]
  39. Jensen, A.R. Why is reaction time correlated with psychometric g? Curr. Dir. Psychol. Sci. 1993, 2, 53–56. [Google Scholar] [CrossRef]
  40. Brody, N. Intelligence; Academic Press: San Diego, CA, USA, 1992. [Google Scholar]
  41. Cattell, R.B. Culture Fair Intelligence Test, Scale 3. A measure of “g”; Institute for Personality and Ability Testing: Champaign, IL, USA, 1963. [Google Scholar]
  42. Weiß, R.H. Grundintelligenztest CFT 3 Skala 3; Westermann: Braunschweig, Germany, 1971. (In German) [Google Scholar]
  43. Horn, W. Leistungsprüfsystem; Hogrefe: Göttingen, Germany, 1983. (In German) [Google Scholar]
  44. Jäger, A.O.; Süß, H.M.; Beauducel, A. Berliner Intelligenzstruktur-Test; Hogrefe: Göttingen, Germany, 1997. (In German) [Google Scholar]
  45. Hick, W.E. On the rate of gain of information. Quart. J. Exp. Psychol. 1952, 4, 11–26. [Google Scholar] [CrossRef]
  46. Neubauer, A.C. Intelligence and RT: A modified Hick paradigm and a new RT paradigm. Intelligence 1991, 15, 175–193. [Google Scholar] [CrossRef]
  47. Longstreth, L.E. Jensen’s reaction time investigations of intelligence: A critique. Intelligence 1984, 8, 139–160. [Google Scholar] [CrossRef]
  48. Cattell, R.B. The scree test for the number of factors. Multivar. Behav. Res. 1966, 1, 245–276. [Google Scholar] [CrossRef] [PubMed]
  49. Cattell, R.B.; Vogelmann, S. A comprehensive trial of the scree and KG criteria for determining the number of factors. Multivar. Behav. Res. 1977, 12, 289–325. [Google Scholar] [CrossRef] [PubMed]
  50. Steiger, J.H. Tests for comparing elements of a correlation matrix. Psychol. Bull. 1980, 87, 245–251. [Google Scholar] [CrossRef]
  51. Lee, I.A.; Preacher, K.J. Calculation for the Test of the Difference between Two Dependent Correlations with One Variable in Common 2013. Computer Software. Available online: http://quantpsy.org/corrtest/corrtest3.htm (accessed on 8 October 2015).
  52. Wechsler, D. Manual for the Wechsler Adult Intelligence Scale; Psychological Corporation: New York, NY, USA, 1955. [Google Scholar]
  53. Wechsler, D. Wechsler Intelligence Scale for Children; Psychological Corporation: New York, NY, USA, 1949. [Google Scholar]
  54. Lindley, R.H.; Wilson, S.M.; Smith, W.R.; Bathurst, K. Reaction time (RT) and IQ: Shape of the task complexity function. Personal. Individ. Differ. 1995, 18, 339–345. [Google Scholar] [CrossRef]
  55. Schweizer, K. Complexity of information processing and the speed–ability relationship. J. Gen. Psychol. 1998, 125, 329–391. [Google Scholar] [CrossRef]
  56. Schweizer, K. The relationship of attention and intelligence. In Handbook of Individual Differences in Cognition: Attention, Memory, and Executive Control; Gruszka, A., Matthews, G., Szymura, B., Eds.; Springer: New York, NY, USA, 2010; pp. 247–262. [Google Scholar]
  57. Ackerman, P.L.; Beier, M.E.; Boyle, M.O. Working memory and intelligence: The same or different constructs? Psychol. Bull. 2005, 131, 30–60. [Google Scholar] [CrossRef] [PubMed]
  58. Luciano, M.; Posthuma, D.; Wright, M.J.; de Geus, E.J.C.; Smith, G.A.; Geffen, G.M.; Boomsma, D.I.; Martin, N.G. Perceptual speed does not cause intelligence and intelligence does not cause perceptual speed. Biol. Psychol. 2005, 70, 1–8. [Google Scholar] [CrossRef] [PubMed]
  59. Anderson, B. G explained. Med. Hypotheses 1995, 45, 602–604. [Google Scholar] [CrossRef]
  60. Jensen, A.R. The g factor: Psychometrics and biology. In The Nature of Intelligence; Bock, G.R., Goode, J.A., Webb, K., Eds.; Wiley: Chichester, UK, 2000; pp. 37–57. [Google Scholar]
  61. Colom, R.; Jung, R.E.; Haier, R.J. Distributed brain sites for the g-factor of intelligence. NeuroImage 2006, 31, 1359–1365. [Google Scholar] [CrossRef] [PubMed]
  62. Karama, S.; Colom, R.; Johnson, W.; Deary, I.J.; Haier, R.; Waber, D.P.; Lepage, C.; Ganjavi, H.; Jung, R.; Evans, A.C.; et al. Cortical thickness correlates of specific cognitive performance accounted for by the general factor of intelligence in healthy children aged 6 to 18. NeuroImage 2011, 55, 1443–1453. [Google Scholar] [CrossRef] [PubMed]
  63. Román, F.J.; Abad, F.J.; Escorial, S.; Burgaleta, M.; Martínez, K.; Álvarez-Linera, J.; Quiroga, M.Á.; Karama, S.; Haier, R.J.; Colom, R. Reversed hierarchy in the brain for general and specific cognitive abilities: A morphometric analysis. Hum. Brain Mapp. 2014, 35, 3805–3818. [Google Scholar] [CrossRef] [PubMed]
  64. Ratcliff, R.; Schmiedek, F.; McKoon, G. A diffusion model explanation of the worst performance rule for reaction time and IQ. Intelligence 2008, 36, 10–17. [Google Scholar] [CrossRef] [PubMed]
  65. Borter, N.; Troche, S.J.; Dodonova, Y.; Rammsayer, T.H. A latent growth curve (LGC) analysis to model task demands and the Worst Performance Rule simultaneously. In Proceedings of the European Mathematical Psychology Group Meeting, Tübingen, Germany, 30 July–1 August 2014.
Figure 1. Correlations between successive RT bands (from fastest to slowest) and the high (g factor) and low (memory composite) g-saturated measures of intelligence in the three Hick RT task conditions.
Table 1. Description of psychometric tests applied for measuring primary mental abilities.
Intelligence Scale | Ability | Task Characteristics
LPS 1 | Verbal comprehension | Detection of typographical errors in nouns
LPS 5 | Word fluency | Anagrams
LPS 7 | Space 1 | Mental rotation
LPS 9 | Space 2 | Three-dimensional interpretation of two-dimensionally presented objects
LPS 10 | Flexibility of Closure | Detection of single elements in complex objects
LPS 14 | Perceptual speed | Comparison of two columns of letters and digits
CFT | Reasoning | Evaluation of figural arrangements based on inductive and deductive thinking
BIS XG | Number 1 | Detection of numbers exceeding the preceding number by "three"
BIS SC | Number 2 | Solving of complex mathematical problems by means of simple mathematical principles
BIS OG | Memory (figural) | Recognition of buildings on a city map
BIS ZZ | Memory (numerical) | Reproduction of two-digit numbers
BIS WM | Memory (verbal) | Reproduction of previously memorized nouns
Note: "Ability" refers to primary mental abilities according to Thurstone [36,37].
Table 2. Mean and standard deviation (SD) of the unstandardized scores on the six subtests of the Leistungsprüfsystem (LPS), the five subtests of the Berlin Intelligence Structure Test (BIS), and Cattell’s Culture Fair Test (CFT) as well as g loadings of each test.
Intelligence Scale | Mean | SD | g Loading
LPS 9 (Space 2) | 28.95 | 6.05 | 0.691
LPS 10 (Flexibility of Closure) | 32.04 | 6.17 | 0.670
LPS 5 (Word fluency) | 29.35 | 8.05 | 0.669
CFT (Reasoning) | 26.10 | 5.23 | 0.660
BIS XG (Number 1) | 22.56 | 7.29 | 0.653
LPS 14 (Perceptual speed) | 25.16 | 4.82 | 0.649
BIS SC (Number 2) | 3.96 | 2.11 | 0.633
LPS 1 (Verbal comprehension) | 23.37 | 6.59 | 0.633
LPS 7 (Space 1) | 22.26 | 7.23 | 0.591
BIS OG (Memory, figural) | 15.53 | 4.54 | 0.428
BIS ZZ (Memory, numerical) | 7.35 | 2.20 | 0.345
BIS WM (Memory, verbal) | 8.13 | 2.47 | 0.317
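The g loadings in Table 2 express each scale's loading on the general factor extracted from the twelve scales (the scree test [48,49] is a common criterion for retaining a single factor). As a rough illustration only, the Python sketch below extracts first-principal-component loadings from a person-by-scale score matrix; the function name, the use of a principal-component extraction rather than the authors' exact factoring method, and the variable `score_matrix` are assumptions, not a description of the original analysis.

```python
import numpy as np

def first_factor_loadings(scores: np.ndarray) -> np.ndarray:
    """Loadings of each scale on the first principal component of the score matrix."""
    # scores: (n_persons, n_scales) matrix of raw test scores
    corr = np.corrcoef(scores, rowvar=False)        # inter-scale correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)         # eigenvalues in ascending order
    v, lam = eigvecs[:, -1], eigvals[-1]            # first (largest) component
    loadings = v * np.sqrt(lam)                     # rescale eigenvector to loadings
    return loadings if loadings.sum() >= 0 else -loadings  # resolve sign indeterminacy

# Hypothetical usage: score_matrix holds the twelve scale scores (columns) per person.
# loadings = first_factor_loadings(score_matrix)
```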
Table 3. Mean RT and standard deviation in ms for the six RT bands of each condition of the Hick RT task.
RT Band | 0-Bit M | 0-Bit SD | 1-Bit M | 1-Bit SD | 2-Bit M | 2-Bit SD
Band 1 | 207 | 23.2 | 257 | 31.5 | 316 | 44.4
Band 2 | 223 | 25.5 | 280 | 34.8 | 351 | 50.7
Band 3 | 236 | 29.5 | 299 | 38.0 | 377 | 56.1
Band 4 | 254 | 35.0 | 319 | 41.7 | 404 | 61.1
Band 5 | 281 | 43.8 | 344 | 47.8 | 438 | 68.6
Band 6 | 345 | 67.7 | 392 | 63.1 | 499 | 84.2
Across bands | 258 | 34.1 | 315 | 40.7 | 398 | 58.8
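The band means in Table 3 follow the usual worst-performance-rule banding procedure: each participant's RTs in a given condition are rank-ordered from fastest to slowest, partitioned into six bands, and the band means are then averaged across participants. A minimal sketch of the per-participant banding step is given below; the array name `rts_0bit` and the equal-split rule are illustrative assumptions rather than the authors' exact trial handling.

```python
import numpy as np

def rt_band_means(rts: np.ndarray, n_bands: int = 6) -> np.ndarray:
    """Mean RT per band after rank-ordering one participant's trials.

    rts: 1-D array of one participant's valid RTs (in ms) for one Hick condition.
    Trials are sorted from fastest to slowest and split into n_bands bands of
    (nearly) equal size; Band 1 holds the fastest trials, Band 6 the slowest.
    """
    ordered = np.sort(rts)
    bands = np.array_split(ordered, n_bands)
    return np.array([band.mean() for band in bands])

# Hypothetical usage for one participant in the 0-bit condition:
# band_means_0bit = rt_band_means(rts_0bit)   # array of six band means
```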
Table 4. Pearson correlations (rxy) of mean RT in each band, and of mean RT across all six bands, with the g factor and the memory composite as the high and low g-saturated measures of intelligence, respectively.
RT Band | g Factor rxy | p Value | Memory Composite rxy | p Value
0-bit condition
Band 1 | −0.005 | 0.94 | 0.010 | 0.88
Band 2 | −0.027 | 0.68 | −0.007 | 0.91
Band 3 | −0.049 | 0.45 | −0.023 | 0.72
Band 4 | −0.078 | 0.23 | −0.020 | 0.76
Band 5 | −0.147 | 0.03 | −0.039 | 0.54
Band 6 | −0.230 | <0.001 | −0.095 | 0.14
Across bands | −0.132 | 0.04 | −0.046 | 0.47
1-bit condition
Band 1 | −0.138 | 0.04 | −0.036 | 0.57
Band 2 | −0.181 | 0.005 | −0.069 | 0.28
Band 3 | −0.250 | <0.001 | −0.107 | 0.09
Band 4 | −0.286 | <0.001 | −0.122 | 0.06
Band 5 | −0.317 | <0.001 | −0.147 | 0.02
Band 6 | −0.366 | <0.001 | −0.170 | 0.01
Across bands | −0.288 | <0.001 | −0.125 | 0.05
2-bit condition
Band 1 | −0.233 | <0.001 | −0.062 | 0.34
Band 2 | −0.278 | <0.001 | −0.101 | 0.12
Band 3 | −0.313 | <0.001 | −0.128 | 0.05
Band 4 | −0.331 | <0.001 | −0.133 | 0.04
Band 5 | −0.332 | <0.001 | −0.122 | 0.06
Band 6 | −0.355 | <0.001 | −0.112 | 0.08
Across bands | −0.326 | <0.001 | −0.116 | 0.07
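Table 4 invites comparisons between dependent correlations, for instance whether the Band 6 RT of a given condition correlates more strongly with the g factor than with the memory composite. Steiger's test for two correlations sharing one variable [50], as implemented in the Lee and Preacher calculator [51], can be sketched as follows; the assumed g–memory correlation in the usage comment is hypothetical and not taken from the article's data.

```python
import numpy as np
from scipy.stats import norm

def steiger_z(r_jk: float, r_jh: float, r_kh: float, n: int):
    """Steiger's Z for two dependent correlations that share variable j.

    r_jk: correlation of the shared variable (e.g., Band 6 RT) with measure k
    r_jh: correlation of the shared variable with measure h
    r_kh: correlation between measures k and h
    n:    sample size
    Returns the Z statistic and its two-tailed p value.
    """
    z_jk, z_jh = np.arctanh(r_jk), np.arctanh(r_jh)     # Fisher r-to-z transforms
    r_bar = (r_jk + r_jh) / 2.0
    cov = (r_kh * (1 - 2 * r_bar**2)
           - 0.5 * r_bar**2 * (1 - 2 * r_bar**2 - r_kh**2)) / (1 - r_bar**2) ** 2
    z = (z_jk - z_jh) * np.sqrt((n - 3) / (2 - 2 * cov))
    return z, 2 * norm.sf(abs(z))

# Band 6 correlations of the 2-bit condition from Table 4, with an assumed
# (hypothetical) g–memory correlation of 0.45 and the reported sample of 245:
# z, p = steiger_z(-0.355, -0.112, 0.45, 245)
```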
Table 5. Results of stepwise regression analyses for the prediction of psychometric g and the memory composite score.
Predictor Variable(s) | R | R² | F Value | p Value | ΔR² | ΔF Value | p Value
Psychometric g
0 bit | 0.230 | 0.053 | 13.52 | 0.001 | – | – | –
0 + 1 bit | 0.366 | 0.134 | 18.73 | 0.001 | 0.081 | 22.73 | 0.001
0 + 1 + 2 bit | 0.396 | 0.157 | 14.92 | 0.001 | 0.023 | 6.45 | 0.05
Memory composite score
0 bit | 0.095 | 0.009 | 2.20 | 0.14 | – | – | –
0 + 1 bit | 0.171 | 0.029 | 3.66 | 0.03 | 0.020 | 5.08 | 0.03
0 + 1 + 2 bit | 0.171 | 0.029 | 2.43 | 0.07 | 0.000 | 0.00 | 0.99
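In Table 5, the mean RTs of the 0-bit, 1-bit, and 2-bit conditions are entered successively as predictors, and each increment is evaluated by ΔR² with its associated F test. The sketch below shows one way such an R² change test can be computed; the variable names (rt0, rt1, g_scores) are assumptions, and the authors' software and entry criteria may have differed.

```python
import numpy as np
from scipy.stats import f as f_dist

def r_squared(y: np.ndarray, X: np.ndarray) -> float:
    """R^2 of an ordinary least-squares regression of y on the columns of X."""
    X1 = np.column_stack([np.ones(len(y)), X])            # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()                     # 1 - SS_res / SS_tot

def r2_change_test(y: np.ndarray, X_reduced: np.ndarray, X_full: np.ndarray):
    """F test for the R^2 increment from the reduced to the full model."""
    n = len(y)
    r2_r, r2_f = r_squared(y, X_reduced), r_squared(y, X_full)
    q = X_full.shape[1] - X_reduced.shape[1]               # number of added predictors
    df2 = n - X_full.shape[1] - 1                          # residual df of the full model
    f_change = ((r2_f - r2_r) / q) / ((1.0 - r2_f) / df2)
    return r2_f - r2_r, f_change, f_dist.sf(f_change, q, df2)

# Hypothetical usage: rt0 and rt1 hold per-person mean RTs of the 0-bit and 1-bit
# conditions, g_scores the factor scores; step 2 adds the 1-bit RT to the 0-bit RT.
# dr2, f_change, p = r2_change_test(g_scores, np.column_stack([rt0]),
#                                   np.column_stack([rt0, rt1]))
```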
