<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Educ.</journal-id>
<journal-title>Frontiers in Education</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Educ.</abbrev-journal-title>
<issn pub-type="epub">2504-284X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">634528</article-id>
<article-id pub-id-type="doi">10.3389/feduc.2021.634528</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Education</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Multilevel Latent Transition Mixture Modeling: Variance Decomposition and Application</article-title>
<alt-title alt-title-type="left-running-head">Morgan and Padgett</alt-title>
<alt-title alt-title-type="right-running-head">Multilevel Latent Transition Mixture Modeling</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Morgan</surname>
<given-names>Grant B.</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/712812/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Padgett</surname>
<given-names>R. Noah</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/1155075/overview"/>
</contrib>
</contrib-group>
<aff>Department of Educational Psychology, Baylor University, <addr-line>Waco</addr-line>, <addr-line>TX</addr-line>, <country>United&#x20;States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/522736/overview">Katerina M. Marcoulides</ext-link>, University of Minnesota Twin Cities, United&#x20;States</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1166337/overview">Ryan Grimm</ext-link>, SRI International, United&#x20;States</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1306447/overview">Jason Rights</ext-link>, University of British Columbia, Canada</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Grant B. Morgan, <email>grant_morgan@baylor.edu</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Assessment, Testing and Applied Measurement, a section of the journal Frontiers in Education</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>08</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>6</volume>
<elocation-id>634528</elocation-id>
<history>
<date date-type="received">
<day>27</day>
<month>11</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>07</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Morgan and Padgett.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Morgan and Padgett</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>Person-centered methodologies generally refer to those that take unobserved heterogeneity of populations into account. The use of person-centered methodologies has proliferated, likely due to a number of factors, such as methodological advances coupled with increased personal computing power and ease of software use. Using latent class analysis and its longitudinal extension, latent transition analysis (LTA), multiple underlying, homogeneous subgroups can be inferred from a set of categorical and/or continuous observed variables within a large heterogeneous data set. Such analyses allow researchers to statistically treat members of different subgroups separately, which may provide researchers with more power to detect effects of interest and closer alignment between statistical modeling and one&#x2019;s guiding theory. For many educational and psychological settings, the hierarchical structure of organizational data must also be taken into account; for example, students (i.e.,&#x20;level-1 units) are nested within teachers/schools (i.e.,&#x20;level-2 units). Multilevel LTA can be used to estimate the number of latent classes in each structured unit and the potential movement, or transitions, participants make between latent classes across time. The transitions/stability between latent classes across time can be treated as the outcome in and of itself, or it can be used as a correlate or predictor of some other, distal outcome. The purpose of this paper is to discuss multilevel LTA, provide considerations for its use, and demonstrate variance decomposition, which requires numerous steps. The variance decomposition steps are presented didactically along with a worked example based on an analysis of the Social Rating Scale of the ECLS-K.</p>
</abstract>
<kwd-group>
<kwd>multilevel</kwd>
<kwd>latent transition</kwd>
<kwd>mixture</kwd>
<kwd>education</kwd>
<kwd>ECLS-K</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Efforts to classify individual cases into homogeneous groups have long been used to better understand complex sets of information. Classification of cases into homogeneous groups has important implications in fields such as education, medicine, psychology, and economics, where identifying smaller subsets of like cases may be of particular interest. Person-centered methodologies generally refer to those that take unobserved heterogeneity of populations into account. That is, rather than treat all individuals as if they originated from a single underlying population, as is true with variable-centered methodologies, person-centered methodologies allow for multiple subpopulations to underlie a set of data. The challenge with these methods is identifying the correct number (i.e.,&#x20;frequency) of subpopulations, or classes, and the parameters (i.e.,&#x20;form) associated with each, when the frequency and form are not known <italic>a priori</italic> (<xref ref-type="bibr" rid="B20">Nylund et&#x20;al., 2007</xref>; <xref ref-type="bibr" rid="B23">Tofighi and Enders, 2008</xref>; <xref ref-type="bibr" rid="B17">Morgan, 2015</xref>).</p>
<p>Mixture modeling generally refers to the family of statistical procedures for identifying homogeneous subpopulations of cases from one large, heterogeneous data set (<xref ref-type="bibr" rid="B15">McLachlan and Peel, 2004</xref>; <xref ref-type="bibr" rid="B7">Collins and Lanza, 2009</xref>). The analysis assumes that an observed dataset is a mixture of observations collected from a finite number of mutually exclusive classes, each with its own characteristics. These procedures have been referred to in the literature under many different names, such as the mixture likelihood approach to clustering (<xref ref-type="bibr" rid="B14">McLachlan and Basford, 1988</xref>; <xref ref-type="bibr" rid="B8">Everitt, 1993</xref>) and model-based clustering (<xref ref-type="bibr" rid="B4">Banfield and Raftery, 1993</xref>). Depending on the metric level of the variables included in the study, other terms used to describe this methodology are latent class analysis, latent profile analysis, and latent class clustering.</p>
<p>Many advances have occurred in mixture modeling as an analytic methodology, which now includes models like factor mixture, growth mixture, diagnostic classification, and latent Markov models. Moreover, mixture modeling is now being applied in fields ranging from education to brain imaging and geosciences to robotics. Despite the proliferation of models and applications that fall within the mixture modeling framework, there are still new areas and angles to explore and better understand in order to more fully realize the strengths of this analytic framework. One such area where limited research has been disseminated involves nested data structures that are collected longitudinally. That is, multilevel mixture models are available to researchers, although they have not been discussed as extensively as other cross-sectional and longitudinal mixture models. <xref ref-type="bibr" rid="B1">Asparouhov and Muth&#xe9;n (2008)</xref> and <xref ref-type="bibr" rid="B11">Kaplan et&#x20;al. (2011)</xref> presented findings from applications of this type of model, but additional guidance on the use of these models may help users better understand their data structures and, ultimately, make better decisions about their research questions. One important consideration when using these models is the ability of the researcher to understand the magnitude and sources of effects through the decomposition of variance. This is especially true in models, such as the ones we present in the next section, that have a nested data structure collected across&#x20;time.</p>
<p>This is precisely the purpose of this paper. That is, we seek to 1) present and discuss multilevel latent transition analysis, 2) describe considerations for the use of this model, and 3) demonstrate a multi-step variance decomposition. The variance decomposition steps are presented didactically along with a worked example based on an analysis of the Social Rating Scale (SRS) of the Early Childhood Longitudinal Study - Kindergarten (ECLS-K). Several additional notes are important due to the didactic nature of this paper. First, latent class analysis and latent profile analysis differ on the basis of the metric level of the indicator variables, yet they are conceptually similar analyses. Latent categorical variables are often referred to as latent classes regardless of the metric level of the indicator variables. As such, there are instances where we use &#x201c;class&#x201d; and &#x201c;profile&#x201d; interchangeably. For this paper, we are modeling continuous indicators, so the term &#x201c;latent profile&#x201d; is most precise, but the discussion and procedures we present apply to models with categorical and/or continuous indicators. Second, we demonstrate the procedures for variance decomposition with a two-class model for didactic reasons; therefore, any substantive conclusions about the specific variables or participants used in the example should be avoided. Third, we used the Grades 3 and 5 SRS scores from the restricted-use ECLS-K 1998 datafile; the scores for Grades 3 and 5 were collected in Spring 2002 and Spring&#x20;2004, respectively.</p>
</sec>
<sec id="s2">
<title>2 Introduction to Latent Transition Analysis</title>
<p>When using latent class analysis and its longitudinal extension, latent transition analysis (LTA), multiple underlying, homogeneous subgroups can be inferred from a set of categorical and/or continuous observed variables within a large heterogeneous data set. Such analyses allow researchers to statistically treat members of different subgroups separately, which may provide researchers with more power to detect effects of interest and closer alignment between statistical modeling and one&#x2019;s guiding theory.</p>
<p>In latent class analysis (LCA), membership in one of the underlying populations is conceptualized as a latent, categorical variable that is not directly observed. Instead, latent class membership must be measured using two or more observed, or indicator, variables, which are taken as manifestations of the latent variable. The number of latent profiles underlying a dataset is not known <italic>a priori</italic> and thus has to be uncovered (<xref ref-type="bibr" rid="B7">Collins and Lanza, 2009</xref>). The process typically involves fitting models that specify different numbers of profiles in order to determine which model best approximates the heterogeneous set of data. Each case is assigned a probability of belonging to each profile based on the alignment of characteristics (e.g., response probabilities, means, variances, covariances) between the case and each profile. When the characteristics of a case are similar to those of a given profile, the case has a high probability of being a member of that subpopulation. When the characteristics of a case are dissimilar to those of a stated profile, the case has a low probability of belonging to the profile. Generally, cases are assigned to the profile to which they have the highest probability of belonging, which is called modal assignment (<xref ref-type="bibr" rid="B7">Collins and Lanza, 2009</xref>). Ideally, the classification probability for each person will be high for one and only one profile. An optimal solution will have high classification probabilities for each latent class, illustrating that the classes are distinct.</p>
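<p>To make the classification step concrete, the following sketch illustrates modal assignment using hypothetical posterior probabilities; the values and zero-based class labels are illustrative only, not taken from the ECLS-K analysis:</p>

```python
import numpy as np

# Hypothetical posterior profile probabilities for four cases in a
# two-profile model; each row sums to 1 (profiles indexed 0 and 1).
posterior = np.array([
    [0.95, 0.05],
    [0.10, 0.90],
    [0.60, 0.40],  # a less distinct case with low classification certainty
    [0.02, 0.98],
])

# Modal assignment: each case is assigned to the profile with the
# highest posterior probability.
modal_class = posterior.argmax(axis=1)

# Average posterior probability for the assigned profile; values near 1.0
# indicate well-separated profiles, values near 0.5 indicate poor separation.
certainty = posterior[np.arange(len(posterior)), modal_class].mean()
```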
<p>The procedures described above can be applied to cross-sectional data or data collected at multiple points in time. LTA, the longitudinal extension of LCA, allows the stability of an LCA solution to be examined across time. Furthermore, LTA allows researchers to examine transition patterns among latent classes across time using one of several strategies. The first strategy is to regress latent class membership at time <inline-formula id="inf1">
<mml:math id="m1">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> on latent class membership at time <italic>t</italic>, which is analogous to a multinomial regression. When three or more waves of data collection are completed, this strategy can be done with or without higher-order effects, which enables a researcher to explore the lasting direct effects of latent profile membership on later profile membership through an autoregressive model (<xref ref-type="bibr" rid="B20">Nylund et&#x20;al., 2007</xref>). However, in this paper we restricted our investigation to two timepoints for didactic purposes, so only a one-lag autoregressive structure is possible. A second strategy is to include a second-order latent class variable that identifies participants who are most likely to switch latent classes (i.e.,&#x20;movers) or remain in the same class (i.e.,&#x20;stayers) across time. Such models have been referred to as mover-stayer LTA models. The mover-stayer model is an extension of the Markov chain model and a special case of the mixed Markov model. Interested readers should see <xref ref-type="bibr" rid="B5">Blumen et&#x20;al. (1955)</xref> and <xref ref-type="bibr" rid="B9">Goodman (1961)</xref> for a thorough presentation of the mover-stayer model and <xref ref-type="bibr" rid="B27">Vermunt (2004)</xref> for a summary of the model. The mover-stayer model and its variants could be considered when certain types of transition are of interest, such as first marriage or death, where transition back to a previous state is not possible, or when transition is believed to occur by a random process (<xref ref-type="bibr" rid="B27">Vermunt, 2004</xref>). The mover-stayer model can be more parsimonious, but its selection should ultimately be aligned with one&#x2019;s guiding theoretical expectation and associated research questions.</p>
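<p>The first strategy amounts to estimating transition probabilities conditional on time-1 membership. As a sketch under assumed (hypothetical) parameter values, the conditional proportions below recover a known transition structure from simulated two-timepoint class sequences:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generating values: a known 2 x 2 transition structure,
# rows conditioning on class at time 1 (classes indexed 0 and 1).
true_trans = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

# Simulate class membership at two timepoints for 5,000 participants.
c1 = rng.choice([0, 1], size=5000, p=[0.6, 0.4])
c2 = np.array([rng.choice([0, 1], p=true_trans[c]) for c in c1])

# The saturated multinomial regression of C2 on C1 reduces to the
# conditional proportions, i.e., the estimated transition matrix.
counts = np.zeros((2, 2))
for a, b in zip(c1, c2):
    counts[a, b] += 1
est_trans = counts / counts.sum(axis=1, keepdims=True)
```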
<p>The modeling strategy chosen has important implications for the structure of the latent transition matrix, which contains probabilities of transitioning to another latent class conditioned on latent class membership at 1) time 1 if only two waves of data collection occurred or 2) <inline-formula id="inf2">
<mml:math id="m2">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> if used with more than two waves of data collection. In the former option, the transition matrix is unstructured, which allows any transition pattern to take place. In the latter option, the diagonal of the transition matrix is constrained to 1.0 among the stayers, which assigns participants classified as stayers a zero probability of switching to another profile (<xref ref-type="bibr" rid="B17">Morgan, 2015</xref>).</p>
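<p>The structural difference between the two options can be sketched with hypothetical transition matrices; the class proportions and probabilities below are illustrative only:</p>

```python
import numpy as np

# Unstructured transition matrix (any transition pattern allowed);
# rows: class at time 1, columns: class at time 2, each row sums to 1.
movers = np.array([[0.70, 0.30],
                   [0.25, 0.75]])

# Mover-stayer constraint: for the "stayer" second-order class the
# diagonal is fixed to 1.0, i.e., zero probability of switching.
stayers = np.eye(2)

# Marginal transition matrix if, say, 40% of participants are stayers.
p_stayer = 0.40
marginal = p_stayer * stayers + (1 - p_stayer) * movers
```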
<sec id="s2-1">
<title>2.1 Multilevel Latent Transition Analysis for Longitudinal Nested Data</title>
<p>Although LTA accounts for the collection of data from the same individuals across time (i.e.,&#x20;time nested within person), the model can also be extended to account for individuals being nested within higher-level units, such as schools, hospitals, or organizations. In education research, statistical methods are commonly used that model students nested within schools, which is the context for the illustration in this paper. In such cases, the hierarchical structure of organizational data must be taken into account because independence between observations is not tenable; that is, students (i.e.,&#x20;level-1 units) are nested within and share the influence of schools (i.e.,&#x20;level-2 units). Thus, multilevel LTA can be used to estimate the number of latent classes in each structured unit and the transitions participants make between classes across time. Finally, the transitions/stability between latent classes across time can be treated as the outcome in and of itself, or it can be used as a correlate or predictor of some other, distal outcome.</p>
<p>The multilevel LTA can be expressed as a series of multinomial logistic regressions at level-1 and as a linear regression at level-2 (<xref ref-type="bibr" rid="B1">Asparouhov and Muth&#xe9;n, 2008</xref>). To illustrate, consider the model below that has two latent classes across two time points. At level-1 the multinomial logistic regression for the latent classification variable at time 1, <inline-formula id="inf3">
<mml:math id="m3">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, can be expressed as<disp-formula id="e1">
<mml:math id="m4">
<mml:mrow>
<mml:mi mathvariant="normal">P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf4">
<mml:math id="m5">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the latent class at time 1 for individual <italic>i</italic> in group <italic>g</italic>, and <inline-formula id="inf5">
<mml:math id="m6">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the intercept of latent class 1 for group <italic>g</italic> and is assumed to be normally distributed. The intercept for latent class 2 at time 1 is set to zero for identification because only one intercept is needed to distinguish two latent classes. The multinomial logistic regression of latent class at time 2 (<inline-formula id="inf6">
<mml:math id="m7">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) on latent class at time 1 (<inline-formula id="inf7">
<mml:math id="m8">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) can be expressed as<disp-formula id="e2">
<mml:math id="m9">
<mml:mrow>
<mml:mi mathvariant="normal">P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>where <inline-formula id="inf8">
<mml:math id="m10">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the latent class intercept at time 2 for group <italic>g</italic>, and <italic>&#x3b3;</italic> represents the expected change in logits from the multinomial logistic regression predicting latent class at time 2 from latent class at time 1. The indicator function (<inline-formula id="inf9">
<mml:math id="m11">
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>) in <xref ref-type="disp-formula" rid="e2">Eq. 2</xref> demonstrates how the latent regression parameter <italic>&#x3b3;</italic> is specific to class 1, which is how class-specific transition probabilities are captured in the model (<xref ref-type="bibr" rid="B1">Asparouhov and Muth&#xe9;n, 2008</xref>; <xref ref-type="bibr" rid="B11">Kaplan et&#x20;al., 2011</xref>). That is, the indicator function takes on values <inline-formula id="inf10">
<mml:math id="m12">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> depending on whether or not latent class membership at time 1 was equal to 1. Like the latent class intercept for class 2 at time 1, the latent class intercept for class 2 at time 2 is set to zero for identification. It should be noted that the number of regression weights, <italic>&#x3b3;</italic>, increases as the number of latent classes increases. For example, two latent classes imply one <italic>&#x3b3;</italic> whereas three classes imply up to four <italic>&#x3b3;</italic>s. Additionally, the model above can be extended to incorporate student-level covariates into the transitional structure (<xref ref-type="bibr" rid="B26">Vermunt et&#x20;al., 1999</xref>).</p>
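<p>Eqs 1, 2 can be evaluated directly once the logit parameters are known. The sketch below uses hypothetical values of the intercepts and <italic>&#x3b3;</italic> (not estimates from the worked example) to show how the indicator function switches the autoregressive effect on and off:</p>

```python
import math

def p_c1(alpha_1g):
    # Eq. 1: P(C1 = 1) for a school with intercept alpha_1g (in logits).
    return math.exp(alpha_1g) / (math.exp(alpha_1g) + 1)

def p_c2_given_c1(alpha_2g, gamma, c1):
    # Eq. 2: P(C2 = 1 | C1 = c1); the indicator I(C1 = 1) turns gamma on
    # only when time-1 membership was class 1.
    logit = alpha_2g + gamma * (1 if c1 == 1 else 0)
    return math.exp(logit) / (math.exp(logit) + 1)

# Hypothetical parameter values for one school g.
alpha_1g, alpha_2g, gamma = 0.5, -0.2, 1.5

p1 = p_c1(alpha_1g)                       # P(C1 = 1)
stay = p_c2_given_c1(alpha_2g, gamma, 1)  # P(C2 = 1 | C1 = 1)
move = p_c2_given_c1(alpha_2g, gamma, 2)  # P(C2 = 1 | C1 = 2)
```

<p>With a positive <italic>&#x3b3;</italic>, members of class 1 at time 1 are more likely than members of class 2 to be in class 1 at time 2, which is how the model captures stability of membership.</p>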
<p>The multinomial autoregression model above is akin to what is used in single-level LTA; however, a unique contribution of multilevel LTA is the incorporation of a latent regression model of latent class intercepts over time. At level-2, the random effects of latent class size across schools can be explained as part of a series of latent linear regression models such as<disp-formula id="e3">
<mml:math id="m13">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>
<disp-formula id="e4">
<mml:math id="m14">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>The regression in <xref ref-type="disp-formula" rid="e3">Eq. 3</xref> models the difference in latent class size across schools at time 1 where <inline-formula id="inf11">
<mml:math id="m15">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the latent class size in logits for school <italic>g</italic> at time 1, <inline-formula id="inf12">
<mml:math id="m16">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the average latent class size at time 1, and <inline-formula id="inf13">
<mml:math id="m17">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the random effect of latent class size across schools. Similarly, the regression in <xref ref-type="disp-formula" rid="e4">Eq. 4</xref> models the differences in latent class size across schools at time 2, where <inline-formula id="inf14">
<mml:math id="m18">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the latent class size in logits for school <italic>g</italic> at time 2, <inline-formula id="inf15">
<mml:math id="m19">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the average latent class size at time 2 unconditional on latent class size at time 1, <italic>&#x3b2;</italic> is the fixed effect of latent class size at time 1 on time 2 latent class size, and <inline-formula id="inf16">
<mml:math id="m20">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the random effect of latent class size across schools unique to time 2. The random effects are commonly assumed to be normally distributed with unique variance estimates at each timepoint [e.g., <inline-formula id="inf17">
<mml:math id="m21">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf18">
<mml:math id="m22">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>]. The level-2 latent regression of the multilevel LTA model expresses how latent class sizes change over time among schools. Factors that influence differences in latent class size over time among schools can be studied in more detail if level-2 covariates are included in the model. The incorporation of covariates can be guided by substantive interest and by information about how much information can be accounted for by these covariates. The amount of information that is contained in the level-2 portion of the model can be expressed by different <inline-formula id="inf19">
<mml:math id="m23">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-like measures that can be computed.</p>
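<p>A small simulation, using hypothetical level-2 parameter values rather than estimates from the worked example, illustrates how Eqs 3, 4 generate school-level variability and how the variance of the time-2 intercepts decomposes into a portion explained by time-1 class size and a residual portion:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools = 1000

# Hypothetical level-2 parameters (logits; not ECLS-K estimates).
mu_a1, mu_a2 = 0.4, 0.1      # average latent class sizes at times 1 and 2
beta = 0.6                   # effect of time-1 class size on time-2 size
var_e1, var_e2 = 0.50, 0.25  # random-effect variances at each time

# Eq. 3: school-specific class-size intercepts at time 1.
alpha_1 = mu_a1 + rng.normal(0.0, np.sqrt(var_e1), n_schools)

# Eq. 4: time-2 intercepts regressed on time-1 intercepts.
alpha_2 = mu_a2 + beta * alpha_1 + rng.normal(0.0, np.sqrt(var_e2), n_schools)

# var(alpha_2) decomposes into an explained part, beta^2 * var(alpha_1),
# and a residual part, var_e2; their ratio gives a level-2 R^2-like measure.
explained = beta**2 * var_e1
r2_level2 = explained / (explained + var_e2)
```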
<p>The multilevel LTA model has many parameters to describe the process which generated the differences in observed characteristics across time, and some of the model features are directly interpretable whereas other features are less easily interpreted. In order to help explain the complex features of the model, various <inline-formula id="inf20">
<mml:math id="m24">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-like measures can be computed to provide information about how the variability in latent class membership is influenced by 1) time, 2) nested data structure, and/or 3) individual latent class membership. For example, <xref ref-type="bibr" rid="B1">Asparouhov and Muth&#xe9;n (2008)</xref> and <xref ref-type="bibr" rid="B11">Kaplan et&#x20;al. (2011)</xref> explicitly described an <inline-formula id="inf21">
<mml:math id="m25">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> measure for the proportion of variance in latent class membership at time 2 that is accounted for by latent class membership at time 1; this measure is readily available in single-level LTA as well. This <inline-formula id="inf22">
<mml:math id="m26">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> measure is<disp-formula id="e5">
<mml:math id="m27">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>where <inline-formula id="inf23">
<mml:math id="m28">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the probability of being in latent class 1 at time 1 and <inline-formula id="inf24">
<mml:math id="m29">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the residual variance associated with the logistic regression performed at level-1. <inline-formula id="inf25">
<mml:math id="m30">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> may also be viewed as the relative size of latent class 1 at time 1. Although the result in <xref ref-type="disp-formula" rid="e5">Eq. 5</xref> is useful, there is more information in a multilevel LTA model that can be used to gain additional insights into the process under investigation.</p>
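The computation in Eq. 5 can be sketched in a few lines. This is an illustrative sketch rather than code supplied by the authors; the values γ = 2.93 and P(C1 = 1) = 0.32 are taken from the application reported later in this article.

```python
import math

def r2_time1_to_time2(gamma: float, p_c1: float) -> float:
    """Proportion of variance in time-2 latent class membership explained
    by time-1 membership (Eq. 5). The residual variance pi^2/3 is the
    variance of the standard logistic distribution."""
    explained = gamma**2 * p_c1 * (1 - p_c1)
    return explained / (explained + math.pi**2 / 3)

# Illustrative values from the ECLS-K application below:
# gamma = 2.93 and P(C1 = 1) = 0.32.
print(round(r2_time1_to_time2(2.93, 0.32), 3))  # 0.362
```

Note that this single-level measure ignores the nesting of students within schools, a limitation discussed next.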
<p>
<xref ref-type="bibr" rid="B1">Asparouhov and Muth&#xe9;n (2008)</xref> used these other <inline-formula id="inf26">
<mml:math id="m31">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-like measures, such as the proportion of variance in <inline-formula id="inf27">
<mml:math id="m32">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> explained by <inline-formula id="inf28">
<mml:math id="m33">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, the proportion of variance in <inline-formula id="inf29">
<mml:math id="m34">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> explained by the group effect at time 1, among others, but they did not describe the steps necessary to calculate these values. A detailed explanation on how to obtain such <inline-formula id="inf30">
<mml:math id="m35">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-like measures is therefore one of the major contributions of this&#x20;work.</p>
<p>One potential limitation of the <inline-formula id="inf31">
<mml:math id="m36">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> measure in <xref ref-type="disp-formula" rid="e5">Eq. 5</xref> is that the hierarchical structure of the data is ignored, which means that it may overestimate the effect that latent class membership at time 1 has on latent class membership at time 2. The methods we demonstrate for decomposing the variance in multilevel LTA explicitly account for this feature of the data. That is, the variance decomposition we describe accounts for the nested data structure by incorporating all model components into the variability in latent class membership at time&#x20;2.</p>
</sec>
<sec id="s2-2">
<title>2.2 Considerations for Using Multilevel Latent Transition Analysis</title>
<p>There are several considerations specific to multilevel LTA that extend beyond those associated with LCA and LTA. First, one must consider whether the research question posed requires that the multilevel aspect of the data be explicitly incorporated into a multilevel LTA model. Not all questions require that the nested nature of the data be explicitly modeled (<xref ref-type="bibr" rid="B16">McNeish et&#x20;al., 2017</xref>). For example, a researcher primarily interested in transitions of students among latent classes over time may not need to explicitly account for a school effect if differences among schools do not influence the students&#x2019; transitions. Instead, the multilevel aspect of the data can be incorporated implicitly through the use of sampling weights (<xref ref-type="bibr" rid="B22">Stapleton, 2013</xref>) or alternatives such as cluster-robust standard errors (<xref ref-type="bibr" rid="B16">McNeish et&#x20;al., 2017</xref>). However, the use of multilevel LTA is likely warranted when researchers believe that characteristics of the group or school are related to differences in latent class membership. This is commonly encountered in education and healthcare applications where between-school and between-hospital differences, respectively, influence large groups of participants simultaneously.</p>
<p>In addition to the nested feature of one&#x2019;s data, another important consideration is the time scale on which data were collected. The time scale of data collection may or may not adhere to the time scale of the transitions that individuals experience. Collins and Lanza (2009, <italic>p</italic>. 209&#x2013;211) described how the estimated transition structure may reveal only chance transitions when the underlying structure transitions very rapidly (e.g., indicators of depression in the last week when data were collected one year apart). Therefore, researchers must think carefully about how observed transitions among latent classes are related to transitions in the underlying construct of interest. In multilevel LTA, in particular, an additional consideration is whether the time scale of the transition is equal across level-2 units, such as schools. In healthcare settings, for example, the time scale of transitioning among depression latent classes may depend in part on the care received across different clinics if clinics have a general approach to helping patients with, say, depressive symptoms. As noted above, these considerations apply in addition to the important considerations that have been identified for LCA and LTA, such as model selection (<xref ref-type="bibr" rid="B20">Nylund et&#x20;al., 2007</xref>; <xref ref-type="bibr" rid="B23">Tofighi and Enders, 2008</xref>; <xref ref-type="bibr" rid="B17">Morgan, 2015</xref>), label switching (<xref ref-type="bibr" rid="B6">Chung et&#x20;al., 2004</xref>; <xref ref-type="bibr" rid="B25">Tueller et&#x20;al., 2011</xref>), the nature of the latent variables (<xref ref-type="bibr" rid="B13">Lubke and Neale, 2008</xref>), and the incorporation of distal outcomes (<xref ref-type="bibr" rid="B12">Lanza et&#x20;al., 2013</xref>; <xref ref-type="bibr" rid="B3">Bakk and Vermunt, 2016</xref>; <xref ref-type="bibr" rid="B21">Nylund-Gibson et&#x20;al., 2019</xref>). 
An excellent collection of applied and methodological papers using these procedures can be found on the Mplus website (<ext-link ext-link-type="uri" xlink:href="http://www.statmodel.com/paper.shtml">www.statmodel.com/paper.shtml</ext-link>).</p>
<p>Next, we illustrate the use of multilevel LTA and explicitly model the multilevel nature of the&#x20;data.</p>
</sec>
</sec>
<sec id="s3">
<title>3 Application of Multilevel Latent Transition Analysis</title>
<sec id="s3-1">
<title>3.1 Sample</title>
<p>The data used are a subset of the ECLS-K national dataset (<xref ref-type="bibr" rid="B24">Tourangeau et&#x20;al., 2009</xref>). The analytic sample for this demonstration was approximately 7,080 students nested within approximately 1,100 schools (sample sizes have been rounded to the nearest 10 in compliance with federal restricted-use data reporting guidelines). Prior to estimating the multilevel LTA model, we subset the ECLS-K data file to the students who remained in the same school from at least Grade 3 to Grade 5. The average number of students per school was 6.4 (<inline-formula id="inf32">
<mml:math id="m37">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>5.3</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>) and ranged from 1 to about 30 students.</p>
</sec>
<sec id="s3-2">
<title>3.2 Instrumentation</title>
<p>In order to demonstrate the model output and subsequent decomposition of the model variance, we used the five Social Rating Scale subscales from the Early Childhood Longitudinal Study&#x2013;Kindergarten (ECLS-K) data. The five major constructs of interest are: Approaches to Learning (AtL), Self-Control (SC), Interpersonal Skills (IPS), Externalizing Problem Behaviors (EPB), and Internalizing Problem Behaviors (IPB) (<xref ref-type="bibr" rid="B24">Tourangeau et&#x20;al., 2009</xref>). These five constructs of child behaviors/characteristics are modeled as being reflective of a child&#x2019;s need for possible additional behavioral intervention. The reliability estimates (coefficient <italic>&#x3b1;</italic>) for these constructs in the full ECLS-K in spring of fifth grade ranged from 0.77 (Internalizing Problem Behaviors) to 0.91 (Approaches to Learning). Reading teachers were asked to report how frequently students exhibited the social skill or behavior identified by each item. The response scale used a four-point frequency scale ranging from 1 (Never) to 4 (Very Often). The same 26 SRS items were administered in Grades 3 and 5. A summary of these raw subscale scores is shown in <xref ref-type="table" rid="T1">Table&#x20;1</xref>.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Summary of observed&#x20;data.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Year</th>
<th align="center">AtL</th>
<th align="center">SC</th>
<th align="center">IPS</th>
<th align="center">EPB</th>
<th align="center">IPB</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Time 1/Grade 3</td>
<td align="char" char=".">3.10 (0.45)</td>
<td align="char" char=".">3.24 (0.35)</td>
<td align="char" char=".">3.10 (0.40)</td>
<td align="char" char=".">1.63 (0.32)</td>
<td align="char" char=".">1.62 (0.28)</td>
</tr>
<tr>
<td align="left">Time 2/Grade 5</td>
<td align="char" char=".">3.10 (0.45)</td>
<td align="char" char=".">3.23 (0.36)</td>
<td align="char" char=".">3.12 (0.42)</td>
<td align="char" char=".">1.66 (0.35)</td>
<td align="char" char=".">1.60 (0.26)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note.</italic> N &#x3d; 7,080, Number of Schools &#x3d; 1,100, Mean (variance). AtL &#x3d; Approaches to Learning; SC &#x3d; Self-control; IPS &#x3d; Interpersonal Skills; EPB &#x3d; Externalizing Problem Behaviors; IPB &#x3d; Internalizing Problem Behaviors.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>The raw subscale scores were computed as the average of the responses to the items on each subscale.</p>
</sec>
<sec id="s3-3">
<title>3.3 Procedures</title>
<p>The model was estimated using the maximum likelihood estimator with robust standard errors (MLR) in Mplus v8.4 (<xref ref-type="bibr" rid="B19">Muth&#xe9;n and Muth&#xe9;n, 2017</xref>) with 2,000 random starting values and 50 final stage optimizations. For illustrative purposes, we estimated only a two-class solution; in practice, additional class enumeration models would be estimated and compared. For this demonstration, we elected not to use sampling weights to reduce the complexity of the example analysis. All inferences from the following model are restricted to this sample of students and are not necessarily representative of the characteristics of students more broadly.</p>
<p>The path diagram for the multilevel LTA model is presented in <xref ref-type="fig" rid="F1">Figure&#x20;1</xref>. We should note that the path diagram includes variance components to aid interpretation of the variance decomposition discussion&#x20;below.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Path diagram of a multilevel latent transition model proposed for ECLS-K Social Rating Scale over two waves. Note. Subscripts indicate the timepoint of the data. AtL &#x3d; Approaches to Learning; SC &#x3d; Self-control; IPS &#x3d; Interpersonal Skills; EPB &#x3d; Externalizing Problem Behaviors; IPB &#x3d; Internalizing Problem Behaviors; <inline-formula id="inf33">
<mml:math id="m38">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d; random intercept for time t, so <inline-formula id="inf34">
<mml:math id="m39">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the random intercept time 1; <inline-formula id="inf35">
<mml:math id="m40">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d; latent class at time t, which takes on values <inline-formula id="inf36">
<mml:math id="m41">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> (For ease of notation, let <inline-formula id="inf37">
<mml:math id="m42">
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> represent latent class at time 1 and let <inline-formula id="inf38">
<mml:math id="m43">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> be the latent class at time 2); <italic>&#x3b2;</italic> &#x3d; the regression weight associated with the random effect at time 1 predicting the random effect at time 2; <italic>&#x3b3;</italic> &#x3d; the change in logits of the latent response tendency variable at time 2 for individuals in class 1 at time 1 (<italic>&#x3b3;</italic> applies only to cases that are in class 1 at time 1, which is captured by the indicator function <inline-formula id="inf39">
<mml:math id="m44">
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> which is a Bernoulli random variable); the residual variance of the level-1 latent response tendency variable relative to the reference class, <inline-formula id="inf40">
<mml:math id="m45">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> is <inline-formula id="inf41">
<mml:math id="m46">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>3.29</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> which is the variance of the logistic distribution.</p>
</caption>
<graphic xlink:href="feduc-06-634528-g001.tif"/>
</fig>
<p>The major inferential goals are the evaluation of the transition parameters (<inline-formula id="inf42">
<mml:math id="m47">
<mml:mrow>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>&#x3b2;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>) and the variability in latent class size across schools (<inline-formula id="inf43">
<mml:math id="m48">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
</sec>
<sec id="s3-4">
<title>3.4 Results</title>
<p>The resulting latent class patterns are shown in <xref ref-type="table" rid="T2">Table&#x20;2</xref>. In the estimation, the latent class structure was fixed to be invariant across time. Latent class 1 is characterized by students who had lower ratings on the three positive constructs (i.e.,&#x20;AtL, SC, and IPS) and higher scores on the constructs reflecting problem behaviors (i.e.,&#x20;EPB and IPB). Latent class 2 was characterized as having higher scores on the three positive constructs (i.e.,&#x20;AtL, SC, and IPS) and lower ratings on the problem behavior constructs (i.e.,&#x20;EPB and&#x20;IPB).</p>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>ECLS-K 2-class model of social rating scale measurement&#x20;model.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="2" align="left">Item</th>
<th colspan="5" align="center">Means</th>
<th colspan="5" align="center">Variances (Time 1/Time 2)</th>
</tr>
<tr>
<th align="center">AtL</th>
<th align="center">SC</th>
<th align="center">IPS</th>
<th align="center">EPB</th>
<th align="center">IPB</th>
<th align="center">AtL</th>
<th align="center">SC</th>
<th align="center">IPS</th>
<th align="center">EPB</th>
<th align="center">IPB</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Class 1</td>
<td align="char" char=".">2.44</td>
<td align="char" char=".">2.60</td>
<td align="char" char=".">2.45</td>
<td align="char" char=".">2.19</td>
<td align="char" char=".">1.87</td>
<td align="char" char=".">0.23/0.23</td>
<td align="char" char=".">0.15/0.15</td>
<td align="char" char=".">0.19/0.19</td>
<td align="char" char=".">0.18/0.19</td>
<td align="char" char=".">0.24/0.22</td>
</tr>
<tr>
<td align="left">Class 2</td>
<td align="char" char=".">3.43</td>
<td align="char" char=".">3.56</td>
<td align="char" char=".">3.45</td>
<td align="char" char=".">1.37</td>
<td align="char" char=".">1.47</td>
<td align="char" char=".">0.23/0.23</td>
<td align="char" char=".">0.15/0.15</td>
<td align="char" char=".">0.19/0.19</td>
<td align="char" char=".">0.18/0.19</td>
<td align="char" char=".">0.24/0.22</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note.</italic> N &#x3d; 7,080. AtL &#x3d; Approaches to Learning; SC &#x3d; Self-control; IPS &#x3d; Interpersonal Skills; EPB &#x3d; Externalizing Problem Behaviors; IPB &#x3d; Internalizing Problem Behaviors.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>The structural model parameters are described in <xref ref-type="table" rid="T3">Table&#x20;3</xref>. At Time 1, Class 1 was the smaller of the two latent classes, making up about 32% of the sample, whereas Class 2 made up about 68% of the sample. Due to the multilevel nature of the data, the parameter estimate, <inline-formula id="inf44">
<mml:math id="m49">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.64</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, offers additional insights into the latent class structure at time 1. That is, the estimate of 0.64 suggests that the proportion of students in Class 1 and Class 2 at time 1 varies depending on the school. In other words, Class 1 contains about 32% of the students at time 1, on average, but this percentage differs across schools with a 95% probable range of 9&#x2013;69%. The larger the variance estimate, the greater the school effect and the wider the range of relative class sizes across schools.</p>
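The 95% probable range reported above can be reproduced by transforming the random-intercept distribution back to the probability scale. This is an illustrative sketch (not the authors' code) using the reported estimates μ = &#x2212;0.76 and var(α&#x2081;) = 0.64 from Table 3.

```python
import math

def inv_logit(x: float) -> float:
    """Inverse-logit (expit) transform from the logit to the probability scale."""
    return 1.0 / (1.0 + math.exp(-x))

def plausible_range(mu: float, var: float, z: float = 1.96):
    """95% plausible range for a class proportion across level-2 units,
    given the mean (mu) and variance (var) of the random intercept
    on the logit scale."""
    sd = math.sqrt(var)
    return inv_logit(mu - z * sd), inv_logit(mu + z * sd)

lo, hi = plausible_range(mu=-0.76, var=0.64)
print(round(lo, 2), round(hi, 2))  # 0.09 0.69, matching the 9-69% range
```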
<table-wrap id="T3" position="float">
<label>TABLE 3</label>
<caption>
<p>ECLS-K 2-class multilevel LTA model of social rating scale results and interpretation.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Parameter</th>
<th align="center">Estimate (SE)</th>
<th align="center">Interpretation</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<inline-formula id="inf45">
<mml:math id="m50">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">&#x2212;0.76 (0.05)</td>
<td align="left">The average logit that separates latent class size among schools. Converting to probability scale, <inline-formula id="inf46">
<mml:math id="m51">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mtext>exp</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mtext>exp</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.32</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, means that for an average school, individuals have a 0.32 probability of being identified as belonging to class 1 at time 1</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf47">
<mml:math id="m52">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">&#x2212;1.91 (0.08)</td>
<td align="left">The average latent class size at time 2 unconditional on latent class size at time 1. This cannot be directly used to obtain the average latent class size at time 2<xref ref-type="table-fn" rid="Tfn1">
<sup>a</sup>
</xref>. The transition probabilities must be incorporated</td>
</tr>
<tr>
<td align="left">&#x3b3;</td>
<td align="char" char=".">2.93 (0.11)</td>
<td align="left">&#x3b3; is the change in logits from time 1 to time 2 for an <italic>individual</italic> in latent class 1. A large (absolute value) of <italic>&#x3b3;</italic> indicates that the relative size of latent classes is likely to change over from time 1 to time 2</td>
</tr>
<tr>
<td align="left">&#x3b2;</td>
<td align="char" char=".">&#x2212;0.19 (0.11)</td>
<td align="left">The change in logits from time 1 to time 2 for a <italic>school</italic> in latent class 1. A larger absolute value of <italic>&#x3b2;</italic> indicates that the relative sizes of the latent classes at time 1 are influential in determining the relative sizes of the classes over time</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf48">
<mml:math id="m53">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">0.64 (0.09)</td>
<td align="left">The school effect on the relative size of each latent class among schools at time 1. Using <inline-formula id="inf49">
<mml:math id="m54">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>0.76</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf50">
<mml:math id="m55">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.64</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, a 95% plausible range for the proportion of students in class 1 across schools is <inline-formula id="inf51">
<mml:math id="m56">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0.09</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>0.69</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf52">
<mml:math id="m57">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">0.97 (0.13)</td>
<td align="left">The variability in relative class size among schools at time 2 that is unexplained by school differences at time 1</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note.</italic> Model fit information <inline-formula id="inf53">
<mml:math id="m58">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>36</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf54">
<mml:math id="m59">
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>L</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>49142</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf55">
<mml:math id="m60">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>98338</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf56">
<mml:math id="m61">
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>98516</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, Entropy <inline-formula id="inf57">
<mml:math id="m62">
<mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.887</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</fn>
<fn id="Tfn1">
<label>a</label>
<p>Latent class proportion/size at Time 1 and Time 2 are typically provided as output in the analysis so there is no need to hand compute these statistics.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>The transition component of the multilevel LTA model is characterized by the parameters <italic>&#x3b3;</italic> (<inline-formula id="inf58">
<mml:math id="m63">
<mml:mrow>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2.93</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.11</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0.001</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>) and <italic>&#x3b2;</italic> (<inline-formula id="inf59">
<mml:math id="m64">
<mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>0.19</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.11</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.077</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>). From these two parameters, a transition matrix (<inline-formula id="inf60">
<mml:math id="m65">
<mml:mi mathvariant="bold-italic">&#x3c4;</mml:mi>
</mml:math>
</inline-formula>) is constructed to help explain the overall effect of time. The details of computing these values are given in the Multilevel LTA Variance Decomposition section, but for now, these results are reported in <xref ref-type="table" rid="T4">Table&#x20;4</xref> along with the interpretation. We found that, on average, about 13% of students who were classified in Class 2 at Time 1 (i.e.,&#x20;third grade) transitioned into Class 1 at Time 2 (i.e.,&#x20;fifth grade). Of the students classified in Class 1 at Time 1, approximately 26% transitioned into Class&#x20;2.</p>
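A transition matrix of this form can be sketched from the reported logit parameters. This sketch evaluates the probabilities at an average school (random effects fixed at their means), so it closely reproduces the class-2 row of Table 4; the published class-1 row may differ slightly because the reported values can account for the school random effects.

```python
import math

def inv_logit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def transition_matrix(mu_alpha2: float, gamma: float):
    """2x2 transition matrix for an average school; row c gives
    P(C2 = d | C1 = c). The gamma shift applies only to the row for
    individuals in class 1 at time 1."""
    p11 = inv_logit(mu_alpha2 + gamma)  # class 1 -> class 1
    p21 = inv_logit(mu_alpha2)          # class 2 -> class 1
    return [[p11, 1 - p11], [p21, 1 - p21]]

# Estimates from Table 3: mu_alpha2 = -1.91, gamma = 2.93.
tau = transition_matrix(mu_alpha2=-1.91, gamma=2.93)
print([[round(p, 2) for p in row] for row in tau])  # class-2 row is [0.13, 0.87]
```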
<table-wrap id="T4" position="float">
<label>TABLE 4</label>
<caption>
<p>Transition structure interpretation for a 2-class multilevel LTA&#x20;model.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Parameter</th>
<th colspan="2" align="center">Probability (logit)</th>
<th align="center">Interpretation</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<inline-formula id="inf61">
<mml:math id="m66">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mrow>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td colspan="2" align="center" char=".">0.74 (1.16)</td>
<td align="left">Individuals in class 1 at time 1 have 0.74 probability of being classified as belonging to class 1 at time 2</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf62">
<mml:math id="m67">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mrow>
<mml:mn>12</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td colspan="2" align="center" char=".">0.24 (0)</td>
<td align="left">Individuals in class 1 at time 1 have 0.24 probability of being classified as belonging to class 2 at time 2. Logit is fixed to 0 for identification</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf63">
<mml:math id="m68">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mrow>
<mml:mn>21</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td colspan="2" align="center" char=".">0.13 (&#x2212;1.91)</td>
<td align="left">Individuals in class 2 at time 1 have 0.13 probability of being classified as belonging to class 1 at time 2</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf64">
<mml:math id="m69">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mrow>
<mml:mn>22</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td colspan="2" align="center" char=".">0.87 (0)</td>
<td align="left">Individuals in class 2 at time 1 have 0.87 probability of being classified as belonging to class 2 at time 2. Logit is fixed to 0 for identification</td>
</tr>
<tr>
<td colspan="3" align="left">Transition (<inline-formula id="inf65">
<mml:math id="m70">
<mml:mi>&#x3c4;</mml:mi>
</mml:math>
</inline-formula>) matrix</td>
<td align="left">The diagonal of the <inline-formula id="inf66">
<mml:math id="m71">
<mml:mi mathvariant="bold-italic">&#x3c4;</mml:mi>
</mml:math>
</inline-formula> matrix contains the probabilities of being identified in the same latent class at both timepoints. The off-diagonal elements are the probabilities of being identified as belonging to a different class</td>
</tr>
<tr>
<td align="left">&#x2014;</td>
<td align="center">
<inline-formula id="inf67">
<mml:math id="m72">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf68">
<mml:math id="m73">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left">&#x2014;</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf69">
<mml:math id="m74">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">0.74</td>
<td align="char" char=".">0.26</td>
<td align="left">&#x2014;</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf70">
<mml:math id="m75">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">0.13</td>
<td align="char" char=".">0.87</td>
<td align="left">&#x2014;</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As alluded to above, there are numerous calculations necessary to extract important modeling results that guide interpretation. The intraclass correlation (ICC) estimate for this model indicates that about 16% of the variability in class assignment at Time 1 can be explained by the school effect. Using the variance decomposition, 10.4% of the variability in latent class membership at Time 2 can be accounted for by the school effect at Time 1. However, 15.8% of the variability in latent class membership at Time 2 can be accounted for by the school effect at Time 2. The incorporation of school-level covariates into <xref ref-type="disp-formula" rid="e3">Eq. 3</xref> and <xref ref-type="disp-formula" rid="e4">Eq. 4</xref> could give insight into which school characteristics are associated with latent class membership at Time 2 by investigating the change in the previous two <inline-formula id="inf71">
<mml:math id="m76">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> measures in models estimated with and without the covariate included. At the student level, the class assignment at Time 1 accounted for about 30.3% of the variability in class assignment at Time 2. The unexplained variability in class assignment at Time 1 also accounted for about 53% of the variability in class assignment at Time 2. Overall, the multilevel LTA model without any covariates explained approximately 46.5% of the variance in class assignment at Time 2. Finally, the inclusion of the multilevel structure explained about 16.2% of the variability in latent class membership at Time&#x20;2.</p>
</sec>
</sec>
<sec id="s4">
<title>4 Multilevel Latent Transition Analysis Variance Decomposition</title>
<p>Clearly, as indicated above, examining the proportion of variability that can be attributed to each component of the model can aid in interpreting the model effects. Although the parameter estimates provide some indication of the magnitude of model effects, their scale can make them difficult to interpret. Furthermore, it is customary in traditional regression to report the proportion of variability explained by the model, and in multilevel models reporting the proportion of variability that is attributable to higher- and/or lower-level units can greatly inform inferences about the magnitude of effects of those units on the outcome(s) of interest. In this didactic model, for example, the estimated regression weight for the effect that latent class membership in Grade 3 had on latent class membership in Grade 5, controlling for school-level effects, was 2.93. Is this effect small, moderate, or large? It is difficult to make such a determination on this metric. Decomposing the variance and reporting the effect as a percentage makes the effect much easier to interpret. That is, the proportion of variability of <inline-formula id="inf72">
<mml:math id="m77">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> explained by <inline-formula id="inf73">
<mml:math id="m78">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is about 30.3%. Considering that the model explained about 46.5% of the total variability in <inline-formula id="inf74">
<mml:math id="m79">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, school-level variables accounted for 16.2% of the variance in <inline-formula id="inf75">
<mml:math id="m80">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. Thus, the school-level variables accounted for more than one-third of all the variability explained by the&#x20;model.</p>
<p>Due to the didactic nature of this paper, we refrain from commenting on any substantive conclusions regarding the size of this effect; rather, we seek to demonstrate how the variance decomposition produces a more intuitive, or at least familiar, effect size estimate. That said, certain <inline-formula id="inf76">
<mml:math id="m81">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-like measures could be calculated for various effects in the model, including at each timepoint (i.e.,&#x20;transition) and for the overall model. Next, we demonstrate the steps required in decomposing the model variance using the parameter estimates from the multilevel LTA model (<inline-formula id="inf77">
<mml:math id="m82">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>0.76</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf78">
<mml:math id="m83">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1.91</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf79">
<mml:math id="m84">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.64</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf80">
<mml:math id="m85">
<mml:mrow>
<mml:mtext>var</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.97</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf81">
<mml:math id="m86">
<mml:mrow>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2.93</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf82">
<mml:math id="m87">
<mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>0.19</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>) to calculate the effect sizes reported in the Results section as estimates of the proportions of variance explained in latent class membership at Time 2. Before presenting the steps in variance decomposition, we provide a section below to demonstrate the variance component derivations. The derivations are included to inform interested readers regarding the scale of the variances in the decomposition.</p>
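<p>To make the decomposition concrete, the effect sizes can be reproduced from the estimates listed above in a few lines of Python. This is a minimal sketch: the Time 1 class-1 proportion used below (0.32) is an illustrative assumption, since the class proportions are not restated in this section.</p>

```python
import math

# Parameter estimates reported above for the multilevel LTA model
var_a1 = 0.64                  # var(alpha_1): school random effect, Time 1
var_a2 = 0.97                  # var(alpha_2): residual school variance, Time 2
gamma, beta = 2.93, -0.19      # structural regression weights
logit_resid = math.pi**2 / 3   # logistic residual variance, ~3.29

p1 = 0.32                      # assumed Pr(C_1 = 1), illustrative only
bern = p1 * (1 - p1)           # variance of the indicator I(C_1 = 1)

# Total variance of the Time 2 latent response tendency (Eq. 13)
var_c2 = beta**2 * var_a1 + var_a2 + gamma**2 * bern + logit_resid

icc = var_a1 / (var_a1 + logit_resid)       # school effect at Time 1, ~0.16
r2_school_t1 = var_a1 / var_c2              # ~0.104
r2_school_t2 = var_a2 / var_c2              # ~0.158
r2_class_t1 = gamma**2 * bern / var_c2      # ~0.30
r2_total = (var_c2 - logit_resid) / var_c2  # ~0.465

print(round(icc, 3), round(r2_school_t1, 3), round(r2_school_t2, 3),
      round(r2_class_t1, 3), round(r2_total, 3))
```

<p>Under the assumed class proportion, these values closely match the effect sizes reported in the Results section.</p>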
<sec id="s4-1">
<title>4.1 Variance Component Derivations</title>
<p>To derive the variance components, the structural equations associated with the path diagram are needed. The structural equations are:<disp-formula id="e6">
<mml:math id="m88">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>
<disp-formula id="e7">
<mml:math id="m89">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>
<disp-formula id="e8">
<mml:math id="m90">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>
<disp-formula id="e9">
<mml:math id="m91">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>The variances associated with these structural components are defined as follows. The variance of <inline-formula id="inf83">
<mml:math id="m92">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, the random effect at time 1, reduces to the variance of the error term only, as <inline-formula id="inf84">
<mml:math id="m93">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is a constant.<disp-formula id="e10">
<mml:math id="m94">
<mml:mrow>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(10)</label>
</disp-formula>The remaining pieces are slightly more complex. For the variance of the random effect at time 2, a long-form derivation is<disp-formula id="e11">
<mml:math id="m95">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x2102;</mml:mi>
<mml:mi mathvariant="double-struck">O</mml:mi>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(11)</label>
</disp-formula>It should be noted that we assumed that the covariance between the time 1 random effect and the time 2 random effect is&#x20;0.</p>
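<p>As a quick numeric check of <xref ref-type="disp-formula" rid="e11">Eq. 11</xref> with the estimates used in this article:</p>

```python
# Eq. 11: V(alpha_2g) = beta^2 * sigma^2_alpha1 + sigma^2_alpha2
beta = -0.19    # regression of the Time 2 random effect on the Time 1 effect
var_a1 = 0.64   # sigma^2_alpha1
var_e2 = 0.97   # sigma^2_alpha2, residual variance at Time 2

var_alpha2 = beta**2 * var_a1 + var_e2
print(round(var_alpha2, 3))  # prints 0.993
```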
<p>The variance of the latent response tendency variable relative to the reference class 2 is defined as follows.<disp-formula id="e12">
<mml:math id="m96">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(12)</label>
</disp-formula>Again, we assumed that the random effect at level 2 and the residual of the latent response tendency variable from the logistic regression have a covariance of 0. The residual variance of the latent response tendency variable is a known constant of <inline-formula id="inf85">
<mml:math id="m97">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
<mml:mo>&#x2248;</mml:mo>
<mml:mn>3.29</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
<p>Lastly, the variance of the latent response tendency variable for time 2 is defined as<disp-formula id="e13">
<mml:math id="m98">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>&#x3b3;</mml:mi>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(13)</label>
</disp-formula>Again, the assumption of a covariance of 0 among the terms is imposed. The unique part of obtaining the variance of the&#x20;latent response variable at time 2 is that an indicator function is a part of the structural equation. An indicator function, <inline-formula id="inf86">
<mml:math id="m99">
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, is a Bernoulli random variable with variance of <inline-formula id="inf87">
<mml:math id="m100">
<mml:mrow>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Therefore, the variance of the indicator function in this case is a function of the size of class 1 (i.e.,&#x20;<inline-formula id="inf88">
<mml:math id="m101">
<mml:mrow>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
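<p>A minimal sketch of this point follows; the proportions used below are illustrative values, not estimates from the article.</p>

```python
def indicator_variance(p):
    """Variance of a Bernoulli indicator I(.) with Pr(condition true) = p."""
    return p * (1 - p)

# The variance peaks for evenly sized classes and vanishes as one
# class comes to dominate
print(indicator_variance(0.5))   # prints 0.25
print(indicator_variance(0.32))  # a 32% class-1 share, ~0.218
```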
<p>To summarize, the variance components are<disp-formula id="equ1">
<mml:math id="m102">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
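<p>Substituting the parameter estimates from the didactic model into these expressions gives concrete values for the four components. As before, the class-1 proportion is an assumed, illustrative value.</p>

```python
import math

beta, gamma = -0.19, 2.93      # structural weights from the didactic model
s2_a1, s2_a2 = 0.64, 0.97      # school-level (residual) variances
p1 = 0.32                      # assumed Pr(C_1ig = 1), illustrative only
logit_resid = math.pi**2 / 3   # logistic residual variance

V_alpha1 = s2_a1
V_alpha2 = beta**2 * s2_a1 + s2_a2
V_C1star = s2_a1 + logit_resid
V_C2star = V_alpha2 + gamma**2 * p1 * (1 - p1) + logit_resid

print([round(v, 3) for v in (V_alpha1, V_alpha2, V_C1star, V_C2star)])
```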
</sec>
<sec id="s4-2">
<title>4.2 Compute <inline-formula id="inf89">
<mml:math id="m103">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-Like Measures</title>
<p>The <inline-formula id="inf90">
<mml:math id="m104">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-like measures that we can compute to help interpret the results from ML-LTA can therefore be defined as follows. First, a useful initial measure is the intraclass correlation, defined at time 1 as<disp-formula id="e14">
<mml:math id="m105">
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>The ICC above is a useful component for disentangling the variance of the latent response tendency variable at time 1. The <inline-formula id="inf91">
<mml:math id="m106">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>-like measures are as follows.</p>
<p>The estimate of the proportion of variance in latent class membership at time 2 (<inline-formula id="inf92">
<mml:math id="m107">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, the latent response tendency on the logit scale) explained by the random effect at time 1 is<disp-formula id="e15">
<mml:math id="m108">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mn>.</mml:mn>
</mml:mrow>
</mml:math>
<label>(15)</label>
</disp-formula>The estimate of the proportion of variance in latent class membership at time 2 (<inline-formula id="inf93">
<mml:math id="m109">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>) explained by the residual variance of <inline-formula id="inf94">
<mml:math id="m110">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is<disp-formula id="e16">
<mml:math id="m111">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mn>.</mml:mn>
</mml:mrow>
</mml:math>
<label>(16)</label>
</disp-formula>The estimate of the proportion of variance in <inline-formula id="inf95">
<mml:math id="m112">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> explained by the residual of <inline-formula id="inf96">
<mml:math id="m113">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> is<disp-formula id="e17">
<mml:math id="m114">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mn>3</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(17)</label>
</disp-formula>The estimate of the proportion of variance in <inline-formula id="inf97">
<mml:math id="m115">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> explained by <inline-formula id="inf98">
<mml:math id="m116">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> is<disp-formula id="e18">
<mml:math id="m117">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="double-struck">V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(18)</label>
</disp-formula>The proportion of variance in <inline-formula id="inf99">
<mml:math id="m118">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> explained by the model is the combination of all the variance components in the denominator minus the residual variance, that is<disp-formula id="e19">
<mml:math id="m119">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">model</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(19)</label>
</disp-formula>Lastly, the proportion of variance in <inline-formula id="inf100">
<mml:math id="m120">
<mml:mrow>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2a;</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> explained by adding the level-2 structure can be estimated as<disp-formula id="e20">
<mml:math id="m121">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">Pr</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mn>3</mml:mn>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(20)</label>
</disp-formula>It should be noted that a similar decomposition is possible for a higher number of latent classes at each time point. The decomposition becomes more involved, however, because additional classes introduce more transitions and more random effects at level 2. Methods for extending the results described above to <italic>k</italic>-class solutions build on ideas similar to random-effects models for multinomial outcomes (<xref ref-type="bibr" rid="B10">Hedeker, 2003</xref>). We are currently developing the extension to three latent classes and intend to identify concise patterns that allow a relatively straightforward variance decomposition with more latent classes.</p>
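For the two-class, two-timepoint case, Eqs. 15&#x2013;20 can be sketched as a short Python function. This is a minimal illustration, not estimation code: all parameter values below are hypothetical, the parameter names (beta, gamma, sigma-squared terms) follow the notation in the text, and the variance of the time-1 random intercept is taken as it enters the time-2 latent response tendency, i.e., scaled by the squared regression coefficient, so that the component estimates sum to the model total in Eq. 19.

```python
import math

def variance_decomposition(beta, gamma, sig2_a1, sig2_a2, p_c1):
    """R^2-like variance decomposition for the time-2 latent response
    tendency C2* on the logit scale (Eqs. 15-20).

    beta    - effect of the time-1 random intercept alpha_1 on C2*
    gamma   - effect of time-1 class membership (C1 = 1) on C2*
    sig2_a1 - variance of the time-1 random intercept alpha_1
    sig2_a2 - residual variance of the time-2 random intercept alpha_2
    p_c1    - Pr(C1 = 1), the time-1 class-1 proportion
    """
    resid = math.pi ** 2 / 3            # standard logistic residual variance
    bern = p_c1 * (1 - p_c1)            # Bernoulli variance of C1 membership
    # Total variance of C2* (denominator shared by Eqs. 15-20)
    var_c2 = beta ** 2 * sig2_a1 + sig2_a2 + gamma ** 2 * bern + resid
    icc = sig2_a1 / (sig2_a1 + resid)   # Eq. 14
    var_c1 = sig2_a1 + resid            # variance of C1* on the logit scale
    return {
        "R2_1": beta ** 2 * sig2_a1 / var_c2,       # Eq. 15: time-1 random effect
        "R2_2": sig2_a2 / var_c2,                   # Eq. 16: residual of alpha_2
        "R2_3": (1 - icc) * var_c1 / var_c2,        # Eq. 17: residual of C1*
        "R2_C1": gamma ** 2 * bern / var_c2,        # Eq. 18: C1 membership
        "R2_model": (beta ** 2 * sig2_a1 + sig2_a2  # Eq. 19: all model terms
                     + gamma ** 2 * bern) / var_c2,
        "R2_add": (beta ** 2 * sig2_a1 + sig2_a2) / var_c2,  # Eq. 20: level-2 structure
    }

# Illustrative (hypothetical) parameter estimates
r2 = variance_decomposition(beta=0.8, gamma=1.5, sig2_a1=0.5, sig2_a2=0.3, p_c1=0.4)
```

With these hypothetical values, roughly a quarter of the variance in the time-2 latent response tendency is attributable to the modeled terms, and the three component estimates in Eqs. 15, 16, and 18 sum to the model total in Eq. 19 by construction.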
</sec>
</sec>
<sec id="s5">
<title>5 Conclusion</title>
<p>In this paper, we have described multilevel latent transition analysis as an approach to investigating heterogeneous, nested data. This model has only recently seen increased use in psychological and educational research, and applications remain relatively scarce. <xref ref-type="bibr" rid="B1">Asparouhov and Muth&#xe9;n (2008)</xref> introduced the multilevel LTA model more than a decade ago and have made recent contributions with LTA models that incorporate random intercepts (<xref ref-type="bibr" rid="B18">Muth&#xe9;n and Asparouhov, 2020</xref>). Thus, advances are being made with models and parameterizations that accommodate more complex data structures, such as nested longitudinal data from multiple underlying subpopulations (i.e.,&#x20;mixtures). When considering alternative models, the choice of modeling approach should, of course, be determined by one&#x2019;s guiding theoretical expectation(s) about the variables of interest. That said, models are also useful to the extent that they are interpretable. As noted, analyzing one&#x2019;s data with multilevel LTA can help researchers classify individual cases into homogeneous groups in order to better understand complex sets of information. Classifying cases into homogeneous groups is important in the social sciences, where identifying smaller subsets of like cases may be of particular interest. In presenting multilevel LTA, our goal was to increase researchers&#x2019; knowledge of and confidence in using these models, because nested data are ubiquitous in educational and psychological research settings.</p>
<p>In order for this goal to be realized, the mechanics of the model and effect size estimation must be transparent. We believe this paper serves an important role in this respect: reporting results as proportions of variance explained by the various parts of the model is consistent with regression analysis, including multilevel modeling, and is therefore familiar to a broad research audience. This detailed decomposition of the variance components gives researchers another dimension for interpreting results from multilevel LTA. The decomposition shown here also adds to the limited research on nested longitudinal data structures by providing guidance on how to understand such complex data.</p>
<p>Being able to interpret the model results and effect size estimates is the necessary foundation for using multilevel LTA to study a broader set of phenomena. The model demonstrated here included two classes across two waves of data collection, which may generalize to the many research studies that use pre-post designs in the social sciences, for example. The use of multilevel LTA could also be expanded to include other types of relationships, such as using the smaller subsets of homogeneous groups as an outcome or predictor in further investigations (<xref ref-type="bibr" rid="B21">Nylund-Gibson et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B2">Bakk and Kuha, in press</xref>). That is, latent class membership could be used to predict a distal outcome. For example, latent class membership could be modeled as a predictor of, say, high school graduation or academic achievement to investigate how early identification of problem behaviors relates to key educational milestones. In summary, multilevel LTA can be useful for investigating longitudinal nested data structures. Researchers can then use the methods we described here to gain even more information about the within- and cross-level relationships among level-1 latent class membership and level-2 cluster effects. Future work is needed to provide a relatively straightforward variance decomposition for models with more latent classes and across more timepoints.</p>
</sec>
</body>
<back>
<sec id="s6">
<title>Data Availability Statement</title>
<p>The data analyzed in this study are subject to the following licenses/restrictions: &#x201c;Restricted-Use Data from U.S. Department of Education&#x201d;. Requests to access these datasets should be directed to <email>iesdata.security@ed.gov</email>.</p>
</sec>
<sec id="s7">
<title>Author Contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="s8">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="s9" sec-type="disclaimer">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Asparouhov</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Muth&#xe9;n</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2008</year>). &#x201c;<article-title>Multilevel Mixture Models</article-title>,&#x201d; in <source>Advances in Latent Variable Mixture Models</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Hancock</surname>
<given-names>G. R.</given-names>
</name>
<name>
<surname>Samuelsen</surname>
<given-names>K. M.</given-names>
</name>
</person-group> (<publisher-loc>Charlotte, NC</publisher-loc>: <publisher-name>Information Age</publisher-name>), <fpage>27</fpage>&#x2013;<lpage>52</lpage>. </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bakk</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Kuha</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>in press</year>). <article-title>Relating Latent Class Membership to External Variables: An Overview</article-title>. <source>Br. J.&#x20;Math. Stat. Psychol.</source> <volume>74</volume>, <fpage>340</fpage>&#x2013;<lpage>362</lpage>. <pub-id pub-id-type="doi">10.1111/bmsp.12227</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bakk</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Vermunt</surname>
<given-names>J.&#x20;K.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Robustness of Stepwise Latent Class Modeling with Continuous Distal Outcomes</article-title>. <source>Struct. Equation Model. A Multidisciplinary J.</source> <volume>23</volume> (<issue>1</issue>), <fpage>20</fpage>&#x2013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2014.955104</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Banfield</surname>
<given-names>J.&#x20;D.</given-names>
</name>
<name>
<surname>Raftery</surname>
<given-names>A. E.</given-names>
</name>
</person-group> (<year>1993</year>). <article-title>Model-based Gaussian and Non-gaussian Clustering</article-title>. <source>Biometrics</source> <volume>49</volume> (<issue>3</issue>), <fpage>803</fpage>&#x2013;<lpage>821</lpage>. <pub-id pub-id-type="doi">10.2307/2532201</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Blumen</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Kogan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mccarthy</surname>
<given-names>P. J.</given-names>
</name>
</person-group> (<year>1955</year>). <source>The Industrial Mobility of Labor as a Probability Process</source>. <publisher-loc>Ithaca, NY</publisher-loc>: <publisher-name>Cornell University</publisher-name>.</citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chung</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Loken</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Schafer</surname>
<given-names>J.&#x20;L.</given-names>
</name>
</person-group> (<year>2004</year>). <article-title>Difficulties in Drawing Inferences with Finite-Mixture Models</article-title>. <source>The Am. Statistician</source> <volume>58</volume> (<issue>2</issue>), <fpage>152</fpage>&#x2013;<lpage>158</lpage>. <pub-id pub-id-type="doi">10.1198/0003130043286</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Collins</surname>
<given-names>L. M.</given-names>
</name>
<name>
<surname>Lanza</surname>
<given-names>S. T.</given-names>
</name>
</person-group> (<year>2009</year>). <source>Latent Class and Latent Transition Analysis: With Applications in the Social, Behavioral, and Health Sciences</source>. <publisher-loc>Hoboken, NJ</publisher-loc>: <publisher-name>John Wiley &#x26; Sons</publisher-name>.</citation>
</ref>
<ref id="B8">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Everitt</surname>
<given-names>B. S.</given-names>
</name>
</person-group> (<year>1993</year>). <source>Cluster Analysis</source>. <edition>3rd ed.</edition> <publisher-loc>Hoboken, NJ</publisher-loc>: <publisher-name>John Wiley &#x26; Sons</publisher-name>.</citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goodman</surname>
<given-names>L. A.</given-names>
</name>
</person-group> (<year>1961</year>). <article-title>Statistical Methods for the Mover-Stayer Model</article-title>. <source>J.&#x20;Am. Stat. Assoc.</source> <volume>56</volume> (<issue>296</issue>), <fpage>841</fpage>&#x2013;<lpage>868</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.1961.10482130</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hedeker</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2003</year>). <article-title>A Mixed-Effects Multinomial Logistic Regression Model</article-title>. <source>Stat. Med.</source> <volume>22</volume> (<issue>9</issue>), <fpage>1433</fpage>&#x2013;<lpage>1446</lpage>. <pub-id pub-id-type="doi">10.1002/sim.1522</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kaplan</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J.-S.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>S.-Y.</given-names>
</name>
</person-group> (<year>2011</year>). &#x201c;<article-title>Multilevel Latent Variable Modeling: Current Research and Recent Developments</article-title>,&#x201d; in <source>The Sage Handbook of Quantitative Methods in Psychology</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Millsap</surname>
<given-names>R. E.</given-names>
</name>
<name>
<surname>Maydeu-Olivares</surname>
<given-names>A.</given-names>
</name>
</person-group> (<publisher-loc>Thousand Oaks, California</publisher-loc>: <publisher-name>SAGE Publications</publisher-name>), <fpage>592</fpage>&#x2013;<lpage>612</lpage>. </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lanza</surname>
<given-names>S. T.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Bray</surname>
<given-names>B. C.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Latent Class Analysis with Distal Outcomes: A Flexible Model-Based Approach</article-title>. <source>Struct. Equ Model.</source> <volume>20</volume> (<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2013.742377</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lubke</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Neale</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Distinguishing between Latent Classes and Continuous Factors with Categorical Outcomes: Class Invariance of Parameters of Factor Mixture Models</article-title>. <source>Multivariate Behav. Res.</source> <volume>43</volume> (<issue>4</issue>), <fpage>592</fpage>&#x2013;<lpage>620</lpage>. <pub-id pub-id-type="doi">10.1080/00273170802490673</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>McLachlan</surname>
<given-names>G. J.</given-names>
</name>
<name>
<surname>Basford</surname>
<given-names>K. E.</given-names>
</name>
</person-group> (<year>1988</year>). <source>Mixture Models: Inference and Applications to Clustering</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>M. Dekker</publisher-name>.</citation>
</ref>
<ref id="B15">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>McLachlan</surname>
<given-names>G. J.</given-names>
</name>
<name>
<surname>Peel</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2004</year>). <source>Finite Mixture Models</source>. <publisher-loc>Hoboken, NJ</publisher-loc>: <publisher-name>John Wiley &#x26; Sons</publisher-name>.</citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>McNeish</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Stapleton</surname>
<given-names>L. M.</given-names>
</name>
<name>
<surname>Silverman</surname>
<given-names>R. D.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>On the Unnecessary Ubiquity of Hierarchical Linear Modeling</article-title>. <source>Psychol. Methods</source> <volume>22</volume> (<issue>1</issue>), <fpage>114</fpage>&#x2013;<lpage>140</lpage>. <pub-id pub-id-type="doi">10.1037/met0000078</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morgan</surname>
<given-names>G. B.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Mixed Mode Latent Class Analysis: An Examination of Fit index Performance for Classification</article-title>. <source>Struct. Equation Model. A Multidisciplinary J.</source> <volume>22</volume> (<issue>1</issue>), <fpage>76</fpage>&#x2013;<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2014.935751</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Muth&#xe9;n</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Asparouhov</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Latent Transition Analysis with Random Intercepts (RI-LTA)</article-title>. <source>Psychol. Methods</source>. <pub-id pub-id-type="doi">10.1037/met0000370</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Muth&#xe9;n</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Muth&#xe9;n</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2017</year>). <source>Mplus User&#x2019;s Guide</source>. <edition>8th</edition>. <publisher-loc>Los Angeles, CA</publisher-loc>: <publisher-name>Muth&#xe9;n &#x26; Muth&#xe9;n</publisher-name>.</citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nylund</surname>
<given-names>K. L.</given-names>
</name>
<name>
<surname>Asparouhov</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Muth&#xe9;n</surname>
<given-names>B. O.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Deciding on the Number of Classes in Latent Class Analysis and Growth Mixture Modeling: A Monte Carlo Simulation Study</article-title>. <source>Struct. Equation Model. A Multidisciplinary J.</source> <volume>14</volume> (<issue>4</issue>), <fpage>535</fpage>&#x2013;<lpage>569</lpage>. <pub-id pub-id-type="doi">10.1080/10705510701575396</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nylund-Gibson</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Grimm</surname>
<given-names>R. P.</given-names>
</name>
<name>
<surname>Masyn</surname>
<given-names>K. E.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Prediction from Latent Classes: A Demonstration of Different Approaches to Include Distal Outcomes in Mixture Models</article-title>. <source>Struct. Equation Model. A Multidisciplinary J.</source> <volume>26</volume> (<issue>6</issue>), <fpage>967</fpage>&#x2013;<lpage>985</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2019.1590146</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Stapleton</surname>
<given-names>L. M.</given-names>
</name>
</person-group> (<year>2013</year>). &#x201c;<article-title>Multilevel Structural Equation Modeling with Complex Sample Data</article-title>,&#x201d; in <source>Structural Equation Modeling: A Second Course</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Hancock</surname>
<given-names>G. R.</given-names>
</name>
<name>
<surname>Mueller</surname>
<given-names>R. O.</given-names>
</name>
</person-group>. <edition>2nd</edition> (<publisher-name>Information Age Publishing</publisher-name>), <fpage>521</fpage>&#x2013;<lpage>562</lpage>. </citation>
</ref>
<ref id="B23">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Tofighi</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Enders</surname>
<given-names>C. K.</given-names>
</name>
</person-group> (<year>2008</year>). &#x201c;<article-title>Identifying the Correct Number of Classes in Growth Mixture Models</article-title>,&#x201d; in <source>Advances in Latent Variable Mixture Models</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Hancock</surname>
<given-names>G. R.</given-names>
</name>
<name>
<surname>Samuelson</surname>
<given-names>K. M.</given-names>
</name>
</person-group> (<publisher-loc>Greenwich, CT</publisher-loc>: <publisher-name>Information Age</publisher-name>), <fpage>317</fpage>&#x2013;<lpage>341</lpage>. </citation>
</ref>
<ref id="B24">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Tourangeau</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Nord</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>L&#xea;</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Sorongon</surname>
<given-names>A. G.</given-names>
</name>
<name>
<surname>Najarian</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2009</year>). <source>Early Childhood Longitudinal Study, Kindergarten Class of 1998&#x2013;99 (ECLS-K), Combined User&#x2019;s Manual for the ECLS-K Eighth-Grade and K&#x2013;8 Full Sample Data Files and Electronic Codebooks (NCES 2009&#x2013;004)</source>. <publisher-loc>Washington, D.C.</publisher-loc>: <publisher-name>National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education</publisher-name>.</citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tueller</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Drotar</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lubke</surname>
<given-names>G. H.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>Addressing the Problem of Switched Class Labels in Latent Variable Mixture Model Simulation Studies</article-title>. <source>Struct. Equation Model. A Multidisciplinary J.</source> <volume>18</volume> (<issue>1</issue>), <fpage>110</fpage>&#x2013;<lpage>131</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2011.534695</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vermunt</surname>
<given-names>J.&#x20;K.</given-names>
</name>
<name>
<surname>Langeheine</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bockenholt</surname>
<given-names>U.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>Discrete-Time Discrete-State Latent Markov Models with Time-Constant and Time-Varying Covariates</article-title>. <source>J.&#x20;Educ. Behav. Stat.</source> <volume>24</volume> (<issue>2</issue>), <fpage>179</fpage>&#x2013;<lpage>207</lpage>. <pub-id pub-id-type="doi">10.3102/10769986024002179</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Vermunt</surname>
<given-names>J.&#x20;K.</given-names>
</name>
</person-group> (<year>2004</year>). &#x201c;<article-title>Multilevel Mixture Models</article-title>,&#x201d; in <source>The Sage Encyclopedia of Research Methods for the Social Sciences</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Lewis-Beck</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Bryman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>T. F.</given-names>
</name>
</person-group> (<publisher-loc>Thousand Oaks, CA</publisher-loc>: <publisher-name>Sage</publisher-name>), <fpage>665</fpage>&#x2013;<lpage>666</lpage>. </citation>
</ref>
</ref-list>
</back>
</article>