About this Research Topic
In the last decade, a growing number of researchers have become interested in applying new tools to establish the equivalence of measurements in comparative political science using mass surveys. The increasingly available cross-national datasets offer tremendous possibilities for comparative survey analysis, including cross-sectional comparative analyses, analysis of cross-national repeated cross-sections, and analysis of cross-national panels. Many of these datasets also contain information about contextual attributes of the different countries and important economic, social, or political information (such as GNP, social spending, migration flow data, or religious composition) that facilitates multi-level analyses. The same comparative logic can also be applied at lower levels of aggregation, for example when regions or even smaller units within countries are compared.
In all these types of comparative analysis, whatever kind of data is used, comparability of the measurements is a necessary condition for obtaining valid results. There is a steadily growing literature on measurement equivalence specifying the statistical prerequisites for comparing unbiased covariances, regression coefficients, and latent means. An increasing number of empirical studies use Multiple Group Confirmatory Factor Analysis (MGCFA) to assess measurement invariance, distinguishing between the levels of configural, metric, and scalar invariance. This rather technical literature, which often focuses on statistical details and pays less attention to theoretical validity, has more recently been complemented by new approaches investigating how respondents interpret particular items, for example by asking probing questions about the content of the items. In recent years this approach has been extended by implementing the probing technique in web surveys (web probing), which yields much larger sample sizes than traditional face-to-face cognitive interviewing.
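To illustrate why the scalar level of invariance matters, consider a minimal numeric sketch (not part of the call itself) of the linear measurement model behind MGCFA, E[y] = tau + lambda * E[eta], where tau is the item intercept, lambda the loading, and eta the latent variable. The group labels and parameter values below are purely hypothetical.

```python
def expected_item_mean(tau, lam, latent_mean):
    """Expected observed item mean under the linear CFA measurement model."""
    return tau + lam * latent_mean

# Metric invariance holds (both groups share the loading lambda = 0.8),
# but scalar invariance does not: the intercepts differ across groups.
group_a = expected_item_mean(tau=2.0, lam=0.8, latent_mean=0.0)
group_b = expected_item_mean(tau=2.5, lam=0.8, latent_mean=0.0)

# The latent means are identical, yet the observed item means differ by
# exactly the intercept gap -- comparing raw item means would wrongly
# suggest a substantive cross-national difference.
print(group_a, group_b)   # 2.0 2.5
print(group_b - group_a)  # 0.5, the intercept difference, not a real effect
```

This is why metric invariance licenses the comparison of regression coefficients, while the stricter scalar level is required before latent means can be compared across groups.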
Since the necessity of testing for measurement invariance was established, Multiple Group Confirmatory Factor Analysis (MGCFA) has been the most widely used technique for evaluating (full or at least partial) measurement invariance for continuous variables. For ordered-categorical items with few categories and a high degree of skewness, the ordinal approach to MGCFA is more appropriate. More recently, these tests of exact equivalence have been criticized as too restrictive, leading to the conclusion that comparisons should not be made even when cross-cultural differences are negligible. To address this criticism, more liberal approaches for continuous variables, known as approximate invariance, have been developed that allow comparisons across many groups and countries which would not be possible with the traditional approaches. For dichotomous items, Item Response Theory (IRT) models have predominantly been used for this purpose. In addition, exploratory and confirmatory latent class analyses for multiple groups have been applied to test measurement equivalence. A recent development is the use of multilevel regression models and multilevel structural equation models, combining individual-level data with higher-level data, to explain why metric or scalar invariance fails to hold. All these procedures are grounded in the latent variable approach and make specific assumptions about the direction of the relationships between latent variables and items. The models just mentioned assume reflective indicators, that is, indicators conceptualized as consequences (reflections) of the underlying latent variable. While this is an appropriate assumption in many cases, the literature also offers examples of formative constructs (in which the items determine the latent variable). This issue of reflective vs.
formative indicators has been a point of critical discussion among political scientists and will be addressed in this Research Topic. We also welcome papers that address not only the added value of new methods but also the "disruptive innovation" they cause for the field and its established views.
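The IRT route to equivalence testing mentioned above can be sketched with the two-parameter logistic (2PL) model, under which the probability of endorsing a dichotomous item depends on a respondent's latent position theta, the item's discrimination a, and its location b. The item parameters below are hypothetical and serve only to illustrate differential item functioning (DIF), the IRT counterpart of measurement inequivalence.

```python
import math

def p_endorse_2pl(theta, a, b):
    """2PL item response function: probability of endorsing the item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# The same item, calibrated separately in two (hypothetical) groups:
# equal discrimination, but the location parameter b differs.
p_group_a = p_endorse_2pl(theta=0.0, a=1.2, b=0.0)  # respondent at theta = 0
p_group_b = p_endorse_2pl(theta=0.0, a=1.2, b=0.5)  # same theta, shifted item

# Respondents with identical latent positions have different endorsement
# probabilities across groups: the item functions differently (DIF),
# so group comparisons on this item would be biased.
print(p_group_a)  # 0.5
print(p_group_b)  # below 0.5
```

A shift in b plays a role analogous to an intercept difference in the CFA framework, while unequal a parameters across groups would correspond to a violation at the loading (metric) level.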
The aim of this Research Topic is two-fold:
1) To inform political scientists about the state of the art in this fast-developing branch of survey methodology and statistics, where much of the basic research has been done outside political science (e.g., in psychometrics and statistics).
2) To present studies applying the different techniques to central political science concepts such as values, attitudes, trust, nationalism, party identification, protest behavior, and populism.
This Research Topic welcomes:
- Methodological papers that both summarize the state of the art in classical and approximate measurement invariance (such as alignment and BSEM) and evaluate their application to political science.
- Contributions on invariance testing with exploratory and confirmatory latent class analysis, the three-step approach, and the strengths and weaknesses of IRT and its applications.
- Discussions of measurement in the context of multilevel regression, multilevel structural equation models, and multilevel CFA.
- Discussions of the state of the art in web probing and of how to conduct robustness studies in the presence of measurement inequivalence for both continuous and ordinal variables.
- Papers that address what these findings mean for qualitative and mixed-methods research: whether qualitative research faces similar problems, and whether it would benefit from similar solutions.
- Given the prevalent gap between theory and practice, papers that directly address this disparity and propose ways to close or narrow it in their specific research area.
- Lastly, manuscripts addressing pedagogical considerations in relation to methods research, as well as their consequences and implications for curricula.
Keywords: comparative analysis, measurement invariance, approximate invariance, multilevel analysis, online probing
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.