Next Issue: Volume 4, June
Previous Issue: Volume 3, December

J. Intell., Volume 4, Issue 1 (March 2016) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF form. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Validity of the Worst Performance Rule as a Function of Task Complexity and Psychometric g: On the Crucial Role of g Saturation
by Thomas H. Rammsayer and Stefan J. Troche
J. Intell. 2016, 4(1), 5; https://doi.org/10.3390/jintelligence4010005 - 16 Mar 2016
Cited by 13 | Viewed by 7022
Abstract
Within the mental speed approach to intelligence, the worst performance rule (WPR) states that the slower trials of a reaction time (RT) task reveal more about intelligence than do faster trials. There is some evidence that the validity of the WPR may depend on high g saturation of both the RT task and the intelligence test applied. To directly assess the concomitant influence of task complexity, as an indicator of task-related g load, and g saturation of the psychometric measure of intelligence on the WPR, data from 245 younger adults were analyzed. To obtain a highly g-loaded measure of intelligence, psychometric g was derived from 12 intelligence scales. This g factor was contrasted with the mental ability scale that showed the smallest factor loading on g. For experimental manipulation of g saturation of the mental speed task, three versions of a Hick RT task with increasing levels of task complexity were applied. While there was no indication for a general WPR effect when a low g-saturated measure of intelligence was used, the WPR could be confirmed for the highly g-loaded measure of intelligence. In this latter condition, the correlation between worst performance and psychometric g was also significantly higher for the more complex 1-bit and 2-bit conditions than for the 0-bit condition of the Hick task. Our findings clearly indicate that the WPR depends primarily on the g factor and, thus, only holds for the highly g-loaded measure of psychometric intelligence. Full article
(This article belongs to the Special Issue Mental Speed and Response Times in Cognitive Tests)
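For readers unfamiliar with worst-performance-rule analyses, the sketch below illustrates the general logic the abstract describes: each person's reaction times are rank-ordered, averaged into bands from fastest to slowest, and each band is correlated with an ability score; under the WPR, the correlation should be strongest for the slowest band. This is only a minimal illustration on simulated data with hypothetical variable names (using numpy), not the authors' analysis code.

# Illustrative sketch (not from the article): the general shape of a
# worst-performance-rule analysis on simulated data with hypothetical names.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_trials, n_bands = 245, 100, 5

# Simulated ability scores and reaction times whose slow tail depends more
# strongly on ability, i.e., the pattern the WPR predicts.
ability = rng.normal(size=n_persons)
tail_scale = np.clip(80.0 - 30.0 * ability, 10.0, None)[:, None]
rts = 300.0 + rng.exponential(scale=tail_scale, size=(n_persons, n_trials))

# Rank-order each person's RTs and average them into bands, fastest to slowest.
bands = np.sort(rts, axis=1).reshape(n_persons, n_bands, -1).mean(axis=2)

# Correlate each band with ability; under the WPR, the (negative) correlation
# should be strongest for the slowest band.
for i in range(n_bands):
    r = np.corrcoef(ability, bands[:, i])[0, 1]
    print(f"band {i + 1} of {n_bands} (fastest to slowest): r = {r:+.2f}")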

Editorial
The Gift that Keeps on Giving—But for How Long?
by Robert J. Sternberg
J. Intell. 2016, 4(1), 4; https://doi.org/10.3390/jintelligence4010004 - 04 Mar 2016
Cited by 2 | Viewed by 11170
Abstract
Some gifts keep on giving. [...] Full article
(This article belongs to the Special Issue Challenges in Intelligence Testing)
Concept Paper
Contextual Responsiveness: An Enduring Challenge for Educational Assessment in Africa
by Robert Serpell and Barnabas Simatende
J. Intell. 2016, 4(1), 3; https://doi.org/10.3390/jintelligence4010003 - 17 Feb 2016
Cited by 11 | Viewed by 9667
Abstract
Numerous studies in Africa have found that indigenous conceptualization of intelligence includes dimensions of social responsibility and reflective deliberation, in addition to the dimension of cognitive alacrity emphasized in most intelligence tests standardized in Western societies. In contemporary societies undergoing rapid socio-cultural and politico-economic change, the technology of intelligence testing has been widely applied to the process of educational selection. Current applications in Zambia rely exclusively on Western style tests and fail to respond to some enduring cultural preoccupations of many parents, educators and policymakers. We discuss how recent and ongoing research addresses the challenges of eco-culturally responsive assessment with respect to assessment of intellectual functions in early childhood, monitoring initial literacy acquisition in middle childhood, and selection for admission to secondary and tertiary education. We argue that the inherent bias of normative tests can only be justified politically if a compelling theoretical account is available of how the construct of intelligence relates to learning and how opportunities for learning are distributed through educational policy. While rapid social change gives rise to demands for new knowledge and skills, assessment of intellectual functions will be more adaptive in contemporary Zambian society if it includes the dimensions of reflection and social responsibility. Full article
(This article belongs to the Special Issue Challenges in Intelligence Testing)

Article
Preventing Response Elimination Strategies Improves the Convergent Validity of Figural Matrices
by Nicolas Becker, Florian Schmitz, Anke M. Falk, Jasmin Feldbrügge, Daniel R. Recktenwald, Oliver Wilhelm, Franzis Preckel and Frank M. Spinath
J. Intell. 2016, 4(1), 2; https://doi.org/10.3390/jintelligence4010002 - 06 Feb 2016
Cited by 27 | Viewed by 8027
Abstract
Several studies have shown that figural matrices can be solved with one of two strategies: (1) Constructive matching consisting of cognitively generating an idealized response, which is then compared with the options provided by the response format; or (2) Response elimination consisting of comparing the response format with the item stem in order to eliminate incorrect responses. A recent study demonstrated that employing a response format that reduces response elimination strategies results in higher convergent validity concerning general intelligence. In this study, we used the construction task, which works entirely without distractors because the solution has to be generated in a computerized testing environment. Therefore, response elimination is completely prevented. Our results show that the convergent validity of general intelligence and working memory capacity when using a test employing the construction task is substantially higher than when using tests employing distractors that followed construction strategies used in other studies. Theoretical as well as practical implications of this finding are discussed. Full article

Article
Integrating Hot and Cool Intelligences: Thinking Broadly about Broad Abilities
by W. Joel Schneider, John D. Mayer and Daniel A. Newman
J. Intell. 2016, 4(1), 1; https://doi.org/10.3390/jintelligence4010001 - 29 Jan 2016
Cited by 21 | Viewed by 14940
Abstract
Although results from factor-analytic studies of the broad, second-stratum abilities of human intelligence have been fairly consistent for decades, the list of broad abilities is far from complete, much less understood. We propose criteria by which the list of broad abilities could be amended and envision alternatives for how our understanding of the hot intelligences (abilities involving emotionally-salient information) and cool intelligences (abilities involving perceptual processing and logical reasoning) might be integrated into a coherent theoretical framework. Full article
(This article belongs to the Special Issue Challenges in Intelligence Testing)