Emerging School Readiness Profiles: Motor Skills Matter for Cognitive- and Non-cognitive First Grade School Outcomes

A promising approach for studying school readiness involves a person-centered approach, aimed at exploring how functioning in diverse developmental domains conjointly affects children’s school outcomes. Currently, however, a systematic understanding of how motor skills, in conjunction with other school readiness skills, affect a child’s school outcomes is lacking. Additionally, little is known about longitudinal associations of school readiness with non-academic (e.g., socioemotional) school outcomes. Therefore, we examined the school readiness skills of a sample of Dutch children (N = 91) with a mean age of 3 years and 4 months (46% girls). We used a multi-informant test battery to assess children’s school readiness in terms of executive functions (EFs), language and emergent literacy, motor skills, and socioemotional behavior. During the spring term of a child’s first grade year, we collected academic and non-academic (i.e., EFs, motor skills, socioemotional and classroom behavior, and creative thinking) school outcomes. A latent profile analysis revealed four distinct profiles. Children in the “Parent Positive” profile (29%) were rated positively by their parents and performed variably on motor and language/emergent literacy skills tests. The second profile – “Multiple Strengths” (13%) – consisted of children showing strengths in multiple domains, especially with respect to motor skills. Children from the third profile – “Average Performers” (50%) – did not show any distinct strengths or weaknesses, but rather displayed school readiness skill levels close to, or just below, the sample mean. Finally, the “Parental Concern” profile (8%) was characterized by high levels of parental concern, combined with slightly above average performance on specific motor and language skills. Motor skills clearly distinguished between profiles, alongside parent-rated EFs and socioemotional behavior, and, to a lesser extent, emergent literacy skills.
School readiness profiles differed in mean scores on first grade academic achievement, parent- and teacher-rated EFs, motor skills, parent-rated socioemotional functioning, and prerequisite learning skills. The pattern of mean differences was complex, suggesting that profiles cannot simply be ranked from low to high in terms of school outcomes. Longitudinal studies are needed to disentangle the interplay between a child’s emerging school readiness and the surrounding context.


S1: Specification of and results from data summarizing and planned missing design
As preparation for the decision process concerning data summarizing, we ran bivariate correlations between all school readiness indicators. These results are presented in Table S1 (see next page).
For all CFAs we found missingness with respect to manifest indicators to be either MCAR (for first grade performance-based EFs, as indicated by a non-significant Little's MCAR test [χ²(86) = 89.17, p = .39]) or MAR (for school readiness performance-based EFs and academic achievement). MAR was assumed in these cases because several observed variables (e.g., age) were found to be related to missingness on both EFs and academic achievement. Accordingly, missingness was handled by means of Full Information Maximum Likelihood (FIML) in all CFAs.
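The MAR diagnostic described above (relating missingness to observed covariates such as age) can be sketched as follows. This is a minimal illustration with hypothetical numbers, not the study's data; the `welch_t` helper is a simple stand-in for the covariate checks actually performed.

```python
# Sketch of the MAR diagnostic: does an observed covariate (here, age)
# differ between children with and without missing scores?
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical ages (in months), split by whether the EF score was missing.
age_observed = [40, 41, 39, 42, 44, 40, 43, 41]
age_missing = [37, 38, 36, 39, 38, 37]

t = welch_t(age_observed, age_missing)
# A clearly nonzero t suggests missingness is related to age (MAR, not MCAR).
print(round(t, 2))
```

A covariate that relates to missingness in this way should be included in the FIML model so the MAR assumption is plausible.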
For all CFAs, model fit was assessed as follows. Good model fit was indicated by a combination of a non-significant χ², a ratio of χ² to degrees of freedom (χ²/df) ≤ 2, a Root Mean Square Error of Approximation (RMSEA) and Standardized Root Mean Square Residual (SRMR) < .08, and a Tucker-Lewis Fit Index (TFI) and Comparative Fit Index (CFI) > .90. We used the robust maximum likelihood estimator (MLR) to account for deviations from normality in several of the EFs and academic achievement subtests. The latent factor metric was defined by fixing the factor variance to one in all models.
Note. A one-factor model was tested in which all five performance-based EFs tests loaded onto one latent EFs factor. No multivariate outliers were present, as indicated by non-significant Mahalanobis distances. RMSEA = root-mean-square error of approximation; TFI = Tucker-Lewis index; CFI = comparative fit index; SRMR = standardized root-mean-square residual.
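Two of the cutoffs above can be computed directly from a model's χ², degrees of freedom, and sample size. The sketch below uses one common RMSEA formulation, sqrt(max(χ² − df, 0) / (df · (N − 1))); the input values are borrowed from the Little's MCAR test reported above purely for arithmetic illustration (they describe a missingness test, not a fitted CFA).

```python
# Illustration of two fit cutoffs: chi2/df <= 2 and RMSEA < .08, using one
# common RMSEA formulation. Input numbers are for illustration only.
from math import sqrt

def rmsea(chi2, df, n):
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 89.17, 86, 91
ratio = chi2 / df  # well below the <= 2 cutoff
print(round(ratio, 2), round(rmsea(chi2, df, n), 3))
```

Note that software packages differ slightly in the RMSEA denominator (N vs. N − 1), so reported values may not match a hand calculation exactly.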

S1.1 CFA school readiness EFs performance-based tests
S1.2 CFA academic achievement tests
The one-factor model with all academic achievement tests loaded onto one academic achievement factor showed a good overall model fit. Inspection of Mahalanobis distances identified three potential multivariate outliers, whose Cook's distances were slightly over one, suggesting some influence of these cases. Inspection of the raw data revealed that these cases were valid (i.e., not error outliers). As a sensitivity check we reran the model with these potential outliers removed, and found no substantial differences in model fit and standardized factor loadings (see Table S1.2.2). We therefore retained the model including these cases for subsequent analyses.
Note. Scales; LWD = logical-mathematical thinking subtest; TE = expressive language subtest; TR = receptive language subtest; Cito = nationwide tests for monitoring yearly progress of Dutch primary school students; DMT = three-minute reading test; RW = mathematical test. a In this model all seven items of academic achievement were loaded onto one factor. b The same model was tested, but with outliers removed. * p < .05 ** p < .01 *** p < .001.
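The outlier screen described above can be sketched as follows: compute each case's Mahalanobis distance from the sample centroid and flag the largest values for inspection. The scores below are hypothetical, and the example is restricted to two variables so the 2×2 covariance matrix can be inverted by hand.

```python
# Sketch of a multivariate outlier screen via Mahalanobis distance,
# for two hypothetical test scores per child.
from statistics import mean
from math import sqrt

def mahalanobis_2d(data):
    xs, ys = [p[0] for p in data], [p[1] for p in data]
    mx, my = mean(xs), mean(ys)
    n = len(data)
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    det = sxx * syy - sxy ** 2
    ds = []
    for x, y in data:
        dx, dy = x - mx, y - my
        # d^2 = [dx dy] @ inverse(cov) @ [dx dy]^T, expanded for the 2x2 case
        d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
        ds.append(sqrt(d2))
    return ds

scores = [(10, 12), (11, 13), (9, 11), (10, 13),
          (11, 12), (9, 12), (10, 11), (25, 4)]  # last case is atypical
d = mahalanobis_2d(scores)
print([round(v, 2) for v in d])
```

In practice the squared distances are compared against a χ² distribution with degrees of freedom equal to the number of variables, and flagged cases are checked against the raw data before any removal.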

S1.3 Planned missing design and CFA first grade performance-based EFs tests
To limit the testing burden in first grade, EFs were assessed using a planned missing data design, in which each child completed a subset of the full EF test battery. Such designs increase validity of measurement, because participants are prevented from being overburdened. We created three EF battery versions, such that the covariance coverage matrix did not include any zeros on the off-diagonals (Silvia et al., 2014; see Table S1.3.1). These batteries were randomly distributed across children. This randomization of missingness enables reliable and unbiased FIML estimation. To confirm that EFs could be represented by one latent factor, as suggested by the IDS-2 manual, we conducted a one-factor CFA with the standard scores of each subtest as manifest indicators. As shown in Table S1.3.2, the one-factor model showed a good model fit, yet a TFI above one might suggest overfitting of the model (Kline, 2015). Therefore we trimmed the model by constraining nonsignificant factor loadings to zero. Most fit indices indicated that this adjusted model fitted our data well, except for the SRMR. Based on the modification indices, we allowed the factor loading of 'Crossing Roads' to be freely estimated. This is in line with the finding that only this subtest (and not the other five) is aimed at tapping into higher-order EFs and might therefore contribute complementary information to the overall latent EFs factor (Baggetta and Alexander, 2016). This model resulted in an overall good model fit (see Table S1.3.2).
Note. RMSEA = root-mean-square error of approximation; TFI = Tucker-Lewis Index; CFI = Comparative Fit Index; SRMR = Standardized Root Mean Square Residual. a In model 1, all six performance-based executive functions (EFs) tests were loaded onto one latent EFs factor. b In model 2, the non-significant factor loadings from model 1 ('Picture Recognition', 'Naming Animal Colors', 'Crossing Roads') were constrained to zero. c In model 3, the factor loading of the subtest 'Crossing Roads' was freely estimated again. * p < .05 ** p < .01 *** p < .001.
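The coverage requirement described above can be sketched as follows: with three battery versions, every pair of subtests must be administered together in at least one version, so that the covariance coverage matrix has no zero off-diagonals. The subtest names and version assignment below are hypothetical.

```python
# Sketch of a three-form planned missing design coverage check: every
# pair of subtests must co-occur in at least one battery version.
from itertools import combinations

subtests = ["A", "B", "C", "D", "E", "F"]
versions = [            # each child receives exactly one version
    {"A", "B", "C", "D"},
    {"A", "B", "E", "F"},
    {"C", "D", "E", "F"},
]

def pairwise_coverage(versions, subtests):
    """Count, for every subtest pair, how many versions contain both."""
    return {
        pair: sum(set(pair) <= v for v in versions)
        for pair in combinations(subtests, 2)
    }

coverage = pairwise_coverage(versions, subtests)
print(min(coverage.values()))  # must be >= 1 for FIML to be estimable
```

A pair with zero coverage would leave the corresponding covariance entirely unobserved, which FIML cannot recover; the randomized assignment of versions then makes the induced missingness MCAR.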
S2: Overview of missing data on school readiness and school outcome data
Note. AA = Academic achievement factor score; EFs = Executive functions; EF = Executive functions factor score; BRIEF = Behavior Rating Inventory of Executive Functioning, parent and teacher total T-score; IDS-2 SEC = Intelligence and Development Scales, Second Version, Socioemotional Competencies standard score; CBCL = Child Behavior Checklist Total Problems T-score; TRF = Teacher Report Form Total Problems T-score; LVT = Leervoorwaarden test (prerequisite learning skills test), all subscales concern gender-corrected z-standardized scores; TCT-DP = Test for Creative Thinking – Drawing Production total raw score.

S3: Specification of latent profile analysis
During all latent profile analyses, means were freely estimated within every profile, while variances were constrained to be equal across profiles. To ensure a stable solution (i.e., to avoid convergence on a local maximum), we carried out two runs for each model. For the first run, we requested 100 sets of initial starting values, of which 20 were retained for the final optimization stage of the maximum likelihood estimation. During the second run, we doubled these numbers of starting values and checked whether the best log likelihood was replicated. Additionally, we verified the assumption of local independence by inspecting the independence of school readiness indicators within profiles. The seeds of each stable solution were specified in subsequent model runs, using the 'OPTSEED' option of Mplus. In these subsequent runs we requested the parametric bootstrapped likelihood ratio test. Again, the number of starting values was doubled to check whether results were sensitive to the number of random starts.
Figure S3. Patterns of mean z-standardized scores of children's school readiness skills per profile.
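The random-starts check described above (handled in Mplus via the STARTS and OPTSEED options) can be sketched in miniature: fit a mixture model from many random starting values, repeat with the number of starts doubled, and verify that the best log likelihood replicates. A one-dimensional two-component Gaussian mixture, fitted with a hand-rolled EM algorithm on simulated data, stands in for the full latent profile model.

```python
# Sketch of the best-log-likelihood replication check for mixture models.
import random
from math import log, exp, pi, sqrt

def loglik(data, w, m1, s1, m2, s2):
    def phi(x, m, s):
        return exp(-((x - m) ** 2) / (2 * s * s)) / (s * sqrt(2 * pi))
    return sum(log(max(w * phi(x, m1, s1) + (1 - w) * phi(x, m2, s2), 1e-300))
               for x in data)

def em_fit(data, rng, iters=100):
    m1, m2 = rng.choice(data), rng.choice(data)  # random starting means
    w, s1, s2 = 0.5, 1.0, 1.0
    for _ in range(iters):
        # E step: responsibility of component 1 for each observation.
        r = []
        for x in data:
            p1 = w * exp(-((x - m1) ** 2) / (2 * s1 * s1)) / s1
            p2 = (1 - w) * exp(-((x - m2) ** 2) / (2 * s2 * s2)) / s2
            r.append(p1 / (p1 + p2 + 1e-300))
        # M step: update weight, means, and (floored) standard deviations.
        n1 = max(sum(r), 1e-9)
        n2 = max(len(data) - n1, 1e-9)
        w = n1 / len(data)
        m1 = sum(ri * x for ri, x in zip(r, data)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = max(sqrt(sum(ri * (x - m1) ** 2 for ri, x in zip(r, data)) / n1), 0.1)
        s2 = max(sqrt(sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, data)) / n2), 0.1)
    return loglik(data, w, m1, s1, m2, s2)

rng = random.Random(1)
data = [rng.gauss(-2.0, 1.0) for _ in range(60)] + [rng.gauss(3.0, 1.0) for _ in range(40)]

def best_loglik(n_starts, seed):
    r = random.Random(seed)
    return max(em_fit(data, r) for _ in range(n_starts))

run1 = best_loglik(10, seed=11)
run2 = best_loglik(20, seed=22)  # doubled number of random starts
print(abs(run1 - run2) < 1e-4)
```

If the best log likelihood fails to replicate when the number of starts is doubled, the solution likely sits at a local maximum and more starts are needed, which is exactly the check Mplus users perform by hand.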

Results of sensitivity analyses
As recommended by Tein et al. (2013), we applied an iterative approach, testing which indicators added significantly to profile separation. That is, we carried out Wald tests to examine mean differences between profiles for each school readiness indicator. Next, we removed those indicators that did not add significantly to profile separation (i.e., the omnibus test was not significant and/or of a small effect size), and reran all previous analysis steps.
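The indicator-screening step above can be sketched as follows. Mplus provides Wald tests of equality of means across profiles; as a conceptually similar stand-in, this computes a one-way ANOVA F statistic plus eta-squared as the effect size for one hypothetical indicator across three profiles, the kind of omnibus-plus-effect-size evidence used to decide whether an indicator separates the profiles.

```python
# Omnibus mean-difference check for one indicator across profiles
# (ANOVA F and eta-squared as a stand-in for the Mplus Wald test).
from statistics import mean

def oneway_f_and_eta2(groups):
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_b = len(groups) - 1
    df_w = sum(len(g) for g in groups) - len(groups)
    f = (ss_between / df_b) / (ss_within / df_w)
    eta2 = ss_between / (ss_between + ss_within)
    return f, eta2

# Hypothetical z-scores of one school readiness indicator in three profiles.
profiles = [
    [0.9, 1.1, 0.8, 1.2],     # strength profile
    [0.0, -0.1, 0.1, 0.0],    # average profile
    [-1.0, -0.8, -1.2, -0.9], # weakness profile
]
f, eta2 = oneway_f_and_eta2(profiles)
print(round(f, 1), round(eta2, 2))
```

An indicator with a non-significant omnibus test or a negligible eta-squared would be dropped, and the profile analysis rerun, exactly as in the iterative procedure described above.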