Feature Papers in Psychometrics and Educational Measurement

A special issue of Psych (ISSN 2624-8611). This special issue belongs to the section "Psychometrics and Educational Measurement".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 10995

Special Issue Editor


Dr. Alexander Robitzsch
Guest Editor
IPN – Leibniz Institute for Science and Mathematics Education, University of Kiel, Olshausenstraße 62, 24118 Kiel, Germany
Interests: item response models; linking; methodology in large-scale assessments; multilevel models; missing data; cognitive diagnostic models; Bayesian methods and regularization

Special Issue Information

Dear Colleagues,

This Special Issue, “Feature Papers in Psychometrics and Educational Measurement”, aims to collect high-quality original articles, reviews, research notes, and short communications in the cutting-edge field of psychometrics and educational measurement. We encourage the Editorial Board Members of Psych to contribute feature papers reflecting the latest progress in their field of research or to invite relevant experts and colleagues to do so.

Dr. Alexander Robitzsch
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Psych is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Editorial


3 pages, 269 KiB  
Editorial
Editorial to the Special Issue “Feature Papers in Psychometrics and Educational Measurement”
by Alexander Robitzsch
Psych 2023, 5(3), 1001-1003; https://doi.org/10.3390/psych5030066 - 12 Sep 2023
Viewed by 866
Abstract
The Special Issue “Feature Papers in Psychometrics and Educational Measurement” (https://www [...] Full article
(This article belongs to the Special Issue Feature Papers in Psychometrics and Educational Measurement)

Research


13 pages, 587 KiB  
Article
Evaluating the Effect of Planned Missing Designs in Structural Equation Model Fit Measures
by Paula C. R. Vicente
Psych 2023, 5(3), 983-995; https://doi.org/10.3390/psych5030064 - 6 Sep 2023
Cited by 6 | Viewed by 1310
Abstract
In a planned missing design, the nonresponses occur according to the researcher’s will, with the goal of increasing data quality and avoiding overly extensive questionnaires. When adjusting a structural equation model to the data, there are different criteria to evaluate how well the theoretical model fits the observed data, with the root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), comparative fit index (CFI) and Tucker–Lewis index (TLI) being the most common. Here, I explore the effect of the nonresponses due to a specific planned missing design—the three-form design—on the mentioned fit indices when adjusting a structural equation model. A simulation study was conducted with a correctly specified model and a model with a misspecified correlation between factors. The CFI, TLI and SRMR indices are affected by the nonresponses, particularly with small samples, low factor loadings and numerous observed variables. The existence of nonresponses when considering misspecified models causes unacceptable values for all four fit indices under analysis, particularly when a strong correlation between factors is considered. The simulations reported here were performed with the simsem package in R, and the full information maximum-likelihood method was used for handling missing data during model fitting. Full article
(This article belongs to the Special Issue Feature Papers in Psychometrics and Educational Measurement)
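
The study itself ran full simulation designs with the simsem package in R; as a rough, assumption-laden illustration of the setup the abstract describes (not the authors' code), the sketch below simulates a single dataset from a two-factor model with lavaan, imposes a three-form pattern of planned nonresponse, and extracts CFI, TLI, RMSEA, and SRMR under full information maximum likelihood. The sample size, loadings, factor correlation, and block assignment are invented for the example.

```r
## Toy sketch: two-factor CFA, three-form planned missingness, FIML fit.
library(lavaan)

set.seed(1)

pop <- "
  f1 =~ 0.7*y1 + 0.7*y2 + 0.7*y3 + 0.7*y4
  f2 =~ 0.7*y5 + 0.7*y6 + 0.7*y7 + 0.7*y8
  f1 ~~ 0.3*f2
  y1 ~~ 0.51*y1
  y2 ~~ 0.51*y2
  y3 ~~ 0.51*y3
  y4 ~~ 0.51*y4
  y5 ~~ 0.51*y5
  y6 ~~ 0.51*y6
  y7 ~~ 0.51*y7
  y8 ~~ 0.51*y8
"
dat <- simulateData(pop, sample.nobs = 300, std.lv = TRUE)

## Three-form design: the common block X (here y1 and y5) is always
## administered; each respondent is assigned a form that omits one of the
## rotating blocks A, B, C.
blocks <- list(A = c("y2", "y6"), B = c("y3", "y7"), C = c("y4", "y8"))
omitted <- sample(names(blocks), nrow(dat), replace = TRUE)
for (b in names(blocks)) dat[omitted == b, blocks[[b]]] <- NA

mod <- "
  f1 =~ y1 + y2 + y3 + y4
  f2 =~ y5 + y6 + y7 + y8
"
fit <- cfa(mod, data = dat, missing = "fiml")
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))
```

Repeating this over many replications, sample sizes, loadings, and (mis)specifications, as simsem automates, yields the distributions of the fit indices that the study examines.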

15 pages, 275 KiB  
Article
Scale Type Revisited: Some Misconceptions, Misinterpretations, and Recommendations
by Leah Feuerstahler
Psych 2023, 5(2), 234-248; https://doi.org/10.3390/psych5020018 - 4 Apr 2023
Cited by 3 | Viewed by 3205
Abstract
Stevens’s classification of scales into nominal, ordinal, interval, and ratio types is among the most controversial yet resilient ideas in psychological and educational measurement. In this essay, I challenge the notion that scale type is essential for the development of measures in these fields. I highlight how the concept of scale type, and of interval-level measurement in particular, is variously interpreted by many researchers. These (often unstated) differences in perspectives lead to confusion about what evidence is appropriate to demonstrate interval-level measurement, as well as the implications of scale type for research in practice. I then borrow from contemporary ideas in the philosophy of measurement to demonstrate that scale type can only be established in the context of well-developed theory and through experimentation. I conclude that current notions of scale type are of limited use, and that scale type ought to occupy a lesser role in psychometric discourse and pedagogy. Full article
(This article belongs to the Special Issue Feature Papers in Psychometrics and Educational Measurement)
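
As a small, hypothetical illustration of the practical stakes of scale type discussed in the essay (not an example taken from it), the toy calculation below shows how a comparison of group means can reverse under a monotone rescaling, which would be an admissible transformation if the scores carried only ordinal information; the data values are arbitrary.

```r
## Two hypothetical groups of raw scores (arbitrary values).
a <- c(1, 1, 10)
b <- c(3, 3, 3)

mean(a) > mean(b)                  # TRUE on the raw scale
mean(log10(a)) > mean(log10(b))    # FALSE after a monotone (order-preserving) transform
```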
13 pages, 1115 KiB  
Article
A Failed Cross-Validation Study on the Relationship between LIWC Linguistic Indicators and Personality: Exemplifying the Lack of Generalizability of Exploratory Studies
by José Ángel Martínez-Huertas, José David Moreno, Ricardo Olmos, Alejandro Martínez-Mingo and Guillermo Jorge-Botana
Psych 2022, 4(4), 803-815; https://doi.org/10.3390/psych4040059 - 13 Oct 2022
Cited by 3 | Viewed by 2379
Abstract
(1) Background: Previous meta-analytic research found small to moderate relationships between the Big Five personality traits and different linguistic computational indicators. However, previous studies included multiple linguistic indicators to predict personality from an exploratory framework. The aim of this study was to conduct a cross-validation study analyzing the relationships between language indicators and personality traits to test the generalizability of previous results; (2) Methods: 643 Spanish undergraduate students were asked to write a 500-word self-description (which was evaluated with the LIWC) and to answer a standardized Big Five questionnaire. Two different analytical approaches using multiple linear regression were followed: first, using the complete data and, second, conducting different cross-validation studies; (3) Results: The first analytical approach showed medium effect sizes. In contrast, the language and personality relationships were not generalizable in the cross-validation studies; (4) Conclusions: We concluded that moderate effect sizes could be obtained when the language and personality relationships were analyzed in single samples, but it was not possible to generalize the model estimates to other samples. Thus, previous exploratory results in this line of research appear to be incompatible with a nomothetic approach. Full article
(This article belongs to the Special Issue Feature Papers in Psychometrics and Educational Measurement)
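
As a hypothetical sketch of the analytical contrast the abstract describes (not the study's data or code), the example below compares the in-sample R² of a linear model with many predictors against its 5-fold cross-validated R²; with pure noise predictors, the in-sample value is optimistic while the cross-validated value falls near or below zero. The variable names, sample size, and number of indicators are assumptions.

```r
## Toy data standing in for LIWC indicators and a Big Five trait score:
## pure noise, so any in-sample fit reflects overfitting.
set.seed(1)
n <- 300; p <- 40
liwc <- as.data.frame(matrix(rnorm(n * p), n, p))
names(liwc) <- paste0("liwc", 1:p)
liwc$trait <- rnorm(n)

fit <- lm(trait ~ ., data = liwc)
summary(fit)$r.squared              # optimistic in-sample R^2

## 5-fold cross-validated R^2 from out-of-fold predictions
folds <- sample(rep(1:5, length.out = n))
pred <- numeric(n)
for (k in 1:5) {
  fit_k <- lm(trait ~ ., data = liwc[folds != k, ])
  pred[folds == k] <- predict(fit_k, newdata = liwc[folds == k, ])
}
1 - sum((liwc$trait - pred)^2) / sum((liwc$trait - mean(liwc$trait))^2)
```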

18 pages, 2045 KiB  
Article
Examining and Improving the Gender and Language DIF in the VERA 8 Tests
by Güler Yavuz Temel, Christian Rietz, Maya Machunsky and Regina Bedersdorfer
Psych 2022, 4(3), 357-374; https://doi.org/10.3390/psych4030030 - 6 Jul 2022
Cited by 1 | Viewed by 2165
Abstract
The purpose of this study was to examine and improve differential item functioning (DIF) across gender and language groups in the VERA 8 tests. We used multigroup concurrent calibration with full and partial invariance based on the Rasch and two-parameter logistic (2PL) models, and classified students into proficiency levels based on their test scores and previously defined cut scores. The results indicated that some items showed gender- and language-specific DIF when using the Rasch model, but we did not detect items with large misfit (suspected DIF) when using the 2PL model. When the item parameters were estimated using the 2PL model with a partial invariance assumption (PI-2PL), only small or negligible misfit items were found in the overall tests for both groups. This study argues that the 2PL model should be preferred because both of its invariance approaches produced less bias. However, especially because the sample sizes of the German and non-German students were not weighted, the non-German students had the highest proportions of misfitting items. Although the items with medium or small misfit did not have a significant effect on the scores and performance classifications, the items with large misfit changed the proportions of students at the highest and lowest performance levels. Full article
(This article belongs to the Special Issue Feature Papers in Psychometrics and Educational Measurement)
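
The abstract does not name the estimation software; as a loose illustration of multigroup concurrent calibration and item-wise DIF testing under a 2PL model (not the study's code or data), the sketch below uses the mirt package in R with simulated responses. The group labels, sample sizes, and item parameters, including the single DIF item, are invented for the example.

```r
library(mirt)

set.seed(1)
n_items <- 10
a  <- matrix(rlnorm(n_items, 0.2, 0.2))   # discriminations
d  <- matrix(rnorm(n_items))              # intercepts, group 1
d2 <- d; d2[3] <- d2[3] - 0.6             # item 3 is harder in group 2 (simulated DIF)

dat <- rbind(simdata(a, d,  N = 1000, itemtype = "2PL"),
             simdata(a, d2, N = 1000, itemtype = "2PL"))
group <- rep(c("German", "non-German"), each = 1000)

## Concurrent multigroup calibration under full invariance of item parameters,
## with group means and variances freely estimated.
fit_full <- multipleGroup(dat, 1, group = group, itemtype = "2PL",
                          invariance = c("slopes", "intercepts",
                                         "free_means", "free_var"))

## Item-wise DIF tests: free the slope (a1) and intercept (d) of one item at
## a time and compare against the fully constrained model.
dif <- DIF(fit_full, which.par = c("a1", "d"), scheme = "drop")
dif
```

Items flagged in such tests could then be given group-specific parameters, which corresponds to the partial invariance calibration the abstract refers to.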
