
EDITORIAL article

Front. Psychol., 29 May 2017
Sec. Quantitative Psychology and Measurement
This article is part of the Research Topic Fitting Psychometric Models: Issues and New Developments

Editorial: Fitting Psychometric Models: Issues and New Developments

Yanyan Sheng*

  • Counseling, Quantitative Methods, and Special Education, Southern Illinois University, Carbondale, IL, United States

Test theory provides a framework for evaluating the psychometric properties of an instrument, supporting tasks such as item analysis, test development, test-score equating, and differential item functioning analysis. The theory relies on formulating a statistical model that specifies the relationships among a set of test concepts while making certain assumptions about these concepts and their relationships. It is therefore essential, before applying a test theory and its related models, to understand the conditions and assumptions necessary for accurate estimation of the model and hence an adequate fit to the data.

The year 1904, in which Thorndike published the first book on test theory, An Introduction to the Theory of Mental and Social Measurements, marks the beginning of the development of classical test theory, which provided the theoretical foundation for educational and psychological measurement in the twentieth century (Thorndike, 1940). In the past five decades, classical test theory has expanded rapidly in various directions (Crocker and Algina, 1986). Specifically, as the focus in data analysis has moved from univariate to multivariate procedures, the statistical modeling of test data has become more complex, involving structural equation modeling (SEM) or modeling with modern test theories such as item response theory (IRT) and generalizability theory. Fitting a complex psychometric model relies on the ability to estimate its parameters accurately, which has become feasible with enhanced computational technology and the emergence of advanced statistical estimation methods, such as weighted least squares (WLS), the expectation-maximization (EM) algorithm, and Markov chain Monte Carlo (MCMC) simulation techniques.

This research topic brings together a selection of insightful papers that focus on fitting psychometric models for polytomous or dichotomous responses, which often appear in the aptitude, achievement, personality, and interest measures used in education and psychology. Specifically, the seven articles published in this research topic demonstrate current developments in classical and modern test theories, in which more complicated modeling of test data is realized via SEM or IRT, focusing on (1) issues and practices associated with fitting or estimating an existing psychometric model, (2) applications of test theory and models to real data problems, and (3) proposals of new test models and/or methods that offer advantages not realized with existing ones.

With respect to individual papers, Ropovik and Barendse et al. focused on model-data fit in the context of SEM, with the former calling attention to the χ2 model test, which is usually disregarded in applications of latent variable modeling, and the latter proposing indices for assessing model-data fit when the analysis is based on pairwise maximum likelihood (PML), which is believed to perform better than WLS for factor analyzing discrete ordinal response data. The pairwise likelihood method belongs to the broad class of pseudo-likelihoods (Besag, 1975; Cox and Reid, 2004), which replace the likelihood with a function that is easier to evaluate and consequently easier to maximize.
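To make the idea concrete (the notation here is ours, not taken from the cited papers), for a vector of ordinal responses y = (y_1, ..., y_p) with model parameters θ, the pairwise log-likelihood replaces the full log-likelihood by a sum over all bivariate margins,

\[
\ell_{\mathrm{PL}}(\theta; y) \;=\; \sum_{i=1}^{p-1} \sum_{j=i+1}^{p} \log f(y_i, y_j; \theta),
\]

so that, when the ordinal responses are modeled through underlying normal variables, only bivariate rather than p-dimensional integrals need to be evaluated.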

De Bondt and Van Petegem and Guo et al., on the other hand, presented two case studies applying SEM to real data problems, investigating the structural validity of questionnaires that measure personality and sleep quality, respectively. It is worth noting that, instead of using the frequentist approach, De Bondt and Van Petegem applied a Bayesian SEM method (Lee, 2007; Kaplan and Depaoli, 2012; Muthén and Asparouhov, 2012) to their problem and found that, with informative priors for cross-loadings and residual variances, the model-data fit was adequate, which was not achieved using maximum likelihood SEM. Bayesian SEM is an innovative and flexible approach to latent variable modeling, and this paper demonstrates its application and, further, the advantages of the Bayesian over the frequentist approach in fitting latent variable models via SEM.
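As a rough illustration of how such informative priors operate (a generic sketch in the spirit of Muthén and Asparouhov (2012), not the exact specification used by De Bondt and Van Petegem), Bayesian SEM replaces cross-loadings that would be fixed to exactly zero under maximum likelihood with approximate zeros by assigning them normal priors with mean zero and a small variance, for example

\[
\lambda_{jk} \sim N(0,\, 0.01) \quad \text{for each cross-loading } \lambda_{jk},
\]

which allows each cross-loading to deviate only slightly from zero and can thereby yield adequate model-data fit where the more restrictive maximum likelihood specification does not.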

Finally, the three articles by Jiang et al., Kuo and Sheng, and Park et al. all involved evaluating parameter estimation in IRT, with the first two focusing on multidimensional graded response models (GRM; Samejima, 1969), which apply to Likert items with ordered categories, and the last focusing on an important assumption underlying conventional dichotomous IRT models. All three articles evaluated the performance of the respective model(s) using Monte Carlo simulations and consequently provided a set of guidelines under the various test situations manipulated in the studies. Specifically, Jiang et al. assessed the performance of the marginal maximum likelihood (MML) method paired with the EM algorithm in estimating the multidimensional GRM [a straightforward extension of the multidimensional IRT model (Reckase, 2009)] under different sample size, test length, and intertrait correlation conditions. Kuo and Sheng focused on a special case of the multidimensional GRM, namely the multi-unidimensional GRM, and compared a number of MML and fully Bayesian (via MCMC techniques) methods for estimating the model under different test conditions. Park et al., on the other hand, illustrated the effect of item parameter drift on estimating unidimensional IRT models (via the use of mixture models), underscoring the importance of checking the invariance assumption before fitting the model.
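Because all three studies rest on Monte Carlo simulation, the following minimal sketch shows how graded responses might be generated under a unidimensional GRM for such a study; the code is illustrative only (it is not taken from any of the three papers), and the sample size, test length, number of categories, and parameter distributions are assumed values.

```python
# Illustrative sketch: simulating graded responses under a unidimensional
# graded response model (GRM; Samejima, 1969). All condition settings
# (sample size, test length, number of categories, parameter distributions)
# are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(2017)

n_persons, n_items, n_cats = 1000, 20, 5       # assumed simulation condition

theta = rng.normal(0.0, 1.0, size=n_persons)   # latent traits
a = rng.uniform(0.5, 2.0, size=n_items)        # item discriminations
# ordered thresholds b_{j,1} < ... < b_{j,K-1} for each item j
b = np.sort(rng.normal(0.0, 1.0, size=(n_items, n_cats - 1)), axis=1)

def grm_category_probs(theta_i, a_j, b_j):
    """Category probabilities P(Y = k), k = 0, ..., K-1, under the GRM."""
    # cumulative probabilities P(Y >= k) for k = 1, ..., K-1
    cum = 1.0 / (1.0 + np.exp(-a_j * (theta_i - b_j)))
    upper = np.concatenate(([1.0], cum))       # P(Y >= 0) = 1
    lower = np.concatenate((cum, [0.0]))       # P(Y >= K) = 0
    return upper - lower                       # P(Y = k) = P(Y >= k) - P(Y >= k + 1)

# simulate an n_persons x n_items matrix of graded responses
responses = np.empty((n_persons, n_items), dtype=int)
for i in range(n_persons):
    for j in range(n_items):
        responses[i, j] = rng.choice(n_cats, p=grm_category_probs(theta[i], a[j], b[j]))
```

The multidimensional and multi-unidimensional versions examined by Jiang et al. and Kuo and Sheng extend this setup by replacing the single latent trait with a vector of correlated traits, and a parameter-recovery study would then fit the model to many such replicated data sets under each condition.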

With current enhanced computational technology and advanced statistical estimation methods, complex modeling of test data can be realized via, e.g., PML, WLS, or MCMC under the framework of SEM or IRT. However, each method or model has its own set of limitations. Consequently, it is essential to understand the conditions under which the model fits adequately, to ensure that little error and bias are involved in estimating model parameters. When applying psychometric models to real data problems, one needs to keep in mind that all models, simple or complex, are approximations to reality. It is hence important that we evaluate various aspects of model-data fit in order to decide on the best available model, which may not necessarily be the most complicated one. The seven articles in this research topic exemplify the empirical or theoretical evaluation of the fit of complex psychometric models and will serve as a useful reference for future research on the development and application of psychometric models. It is hoped that more studies will be conducted in this area to further advance test theory.

Author Contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Besag, J. E. (1975). Statistical analysis of non-lattice data. Statistician 24, 179–195. doi: 10.2307/2987782

Cox, D. R., and Reid, N. (2004). A note on pseudolikelihood constructed from marginal densities. Biometrika 91, 729–737. doi: 10.1093/biomet/91.3.729

Crocker, L., and Algina, J. (1986). Introduction to Classical and Modern Test Theory. Belmont, CA: Wadsworth.

Kaplan, D., and Depaoli, S. (2012). “Bayesian structural equation modeling,” in Handbook of Structural Equation Modeling, ed. R. H. Hoyle (New York, NY: Guilford), 650–673.

Lee, S.-Y. (2007). Structural Equation Modeling: A Bayesian Approach. New York, NY: Wiley.

Muthén, B., and Asparouhov, T. (2012). Bayesian structural equation modeling: a more flexible representation of substantive theory. Psychol. Methods 17, 313–335. doi: 10.1037/a0026802

Reckase, M. D. (2009). Multidimensional Item Response Theory. New York, NY: Springer.

Samejima, F. (1969). Estimation of Latent Ability Using a Response Pattern of Graded Scores. Psychometric Monograph No. 17. Richmond, VA: Psychometric Society.

Thorndike, E. L. (1940). An Introduction to the Theory of Mental and Social Measurements. New York, NY: Science Press.

Keywords: test theory, psychometric modeling, structural equation modeling, item response theory, model-data fit, parameter estimation

Citation: Sheng Y (2017) Editorial: Fitting Psychometric Models: Issues and New Developments. Front. Psychol. 8:856. doi: 10.3389/fpsyg.2017.00856

Received: 10 February 2017; Accepted: 09 May 2017;
Published: 29 May 2017.

Edited by:

Jason C. Immekus, University of Louisville, United States

Reviewed by:

Jason C. Immekus, University of Louisville, United States
Mark D. Reckase, Michigan State University, United States
Wolfgang Rauch, Heidelberg University, Germany
Daniel Bolt, University of Wisconsin-Madison, United States

Copyright © 2017 Sheng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yanyan Sheng, ysheng@siu.edu
