

Front. Psychol., 16 October 2020

Convergent and Discriminant Validities of SCBE-30 Questionnaire Using Correlated Trait–Correlated Method Minus One

  • 1William James Center for Research, ISPA – Instituto Universitário, Lisbon, Portugal
  • 2Instituto Universitário de Lisboa (ISCTE-IUL), CIS-IUL, Lisbon, Portugal
  • 3Family and Child Development, Auburn University, Auburn, AL, United States

Correlated trait–correlated method minus one was used to evaluate the convergent and discriminant validity of the Social Competence Behavior Evaluation questionnaire (Social Competence, Anger-Aggression, Anxiety-Withdrawal) across multiple raters. A total of 369 children (173 boys and 196 girls; Mage = 55.85 months, SDage = 11.54) were rated by their mothers, fathers, and teachers. Results showed more convergence between parents than between parent and teacher ratings. Mothers and teachers share a common view of child behavior that is not shared with fathers. Parents had more difficulty distinguishing internalizing from externalizing behaviors (especially fathers). Measurement invariance across child sex was explored; results imply that differences between boys and girls were not due to the measure. Girls (compared to boys) were described as more socially competent by their fathers and teachers, while boys were described as more aggressive by their mothers and teachers.


The Social Competence Behavior Evaluation questionnaire (SCBE-30) is a rating scale assessing the affective quality of children’s relationships with peers and significant adults. It provides a standardized description of affect and behavior in context, discriminating behavioral-emotional problems from social adjustment (LaFreniere and Dumas, 1996). It has been used with children from 30 to 78 months, in different international settings and in cross-sectional and longitudinal research (LaFreniere et al., 2002). Correlated trait–correlated method minus one [CT-C(M−1)] (Eid, 2000; Eid et al., 2003), a multiple-trait by multiple-method (MTMM) approach, was used to examine the convergent and discriminant validity of the SCBE-30 across mother, father, and teacher ratings.

The Social Competence Behavior Evaluation questionnaire contains three scales with 10 items each: two distinct patterns of maladaptive behavior, Anger-Aggression (AA) and Anxiety-Withdrawal (AW), and one adaptive pattern, Social Competence (SC). It presents good internal consistency across different countries: 0.87 for SC, 0.88 for AA, and 0.84 for AW (LaFreniere et al., 2002). It has been widely used in research, educational, and clinical settings, with demonstrated validity across cultural settings (Zupancic et al., 2000; Chen and Jiang, 2002; Kotler and McMahon, 2002; LaFreniere et al., 2002; Dumas et al., 2011; Klyce et al., 2011; Sette et al., 2014; Vasquez-Echeverria et al., 2016; Bárrig and Parco, 2017). However, most studies used only one rater (usually the teacher), and few compared teachers with parents (mostly mothers). Klyce et al. (2011) reported that although parents and teachers presented identical factor structures, they showed low agreement when rating children’s behaviors. Munzer et al. (2018) also reported low parent-teacher concordance, especially among girls. Studies of ratings of children’s social behaviors have likewise reported low agreement between informants (Achenbach et al., 1987; Winsler and Wallace, 2002; Konold et al., 2004; Reyes and Kazdin, 2005). Parents and teachers agree more on problem behaviors than on social skills, and more on externalizing than internalizing behaviors (Achenbach et al., 1987; Winsler and Wallace, 2002). Also, parents (more than teachers) rate children as having more behavior problems (Berg-Nielsen et al., 2012).

Rating scales imply that raters judge how a child typically behaves in comparison with others, constructing their evaluations retrospectively (based on memory, which may be biased). Ratings can also be influenced by raters’ knowledge, beliefs, and language, as well as by the social values attributed to the behavior; they reflect raters’ ideas and representations (Uher et al., 2013). In rating scales, the item statements and answer categories involve encoding schemes (i.e., variables and values); they include adjectives from everyday language that allow raters to interpret the item meaning, but these are often ambiguous and context-sensitive. Campbell and Fiske (1959) stated that a psychological variable’s score reflects not only the psychological construct under consideration but also systematic method-specific influences, and they demonstrated the necessity of including at least two different methods (which should converge when measuring the same trait) to separate trait from method influences.

Most studies assessed inter-rater agreement by correlating ratings, but even highly correlated data can present poor agreement. MTMM analysis allows the study of multiple traits measured by multiple methods and evaluates convergent and discriminant validity more robustly (Lance et al., 2002). Confirmatory factor analysis (CFA) is one of the most common methods to analyze MTMM data (Eid et al., 2006) and allows correlations to be calculated among latent factors rather than observed variables, accounting for measurement error. According to Eid et al. (2008), when selecting a CFA model for MTMM analysis the key issue is the type of methods in the model. Methods can be either interchangeable (i.e., all raters have the same access to the target and therefore rate it from the same perspective) or structurally different (raters have different access to the target and respond from different perspectives). We selected CT-C(M−1) to compare and contrast our structurally different methods, with each SCBE-30 trait being represented by multiple indicators. Parents and teachers were considered structurally different and were fixed for each child. We specifically selected different raters (mother, father, and teacher) to evaluate the same child on multiple traits (SC, AA, AW), recognizing that each one has a unique perspective and access to partially overlapping information about child behavior. Since the CT-C(M−1) model is not symmetrical, the meaning of the model parameters depends on the method chosen as the reference standard (Geiser et al., 2008). One method is selected as reference (reference rater) and its true-score indicators are used to predict the true-score indicators of the non-reference methods (other raters). If we choose mother ratings as the reference method in the CT-C(M−1) model, we are evaluating the convergence of mother ratings with teacher ratings and father ratings; this analysis will not show how teacher ratings and father ratings converge with each other.
A second analysis is therefore needed in which teacher ratings are the reference method and father ratings are one of the non-reference methods, or father ratings are the reference method and teacher ratings are one of the non-reference methods. Convergence between methods is inferred from the consistency coefficients of non-reference methods, reflecting shared variance, whereas the method-specific coefficient reflects the proportion of variance in non-reference methods that is not predicted by the true score of the reference method. For a multidimensional rating scale such as the SCBE-30, subscale convergent validity is inferred when monotrait–heteromethod correlations (same subscale across different raters) are relatively high, discriminant validity is inferred from relatively low heterotrait–monomethod correlations (different subscales within raters), and method effects (rater effects) are inferred when correlations between subscales within a method are larger than correlations across methods but within traits (Lance et al., 2002). Since the selection of the reference method influences the meanings of trait and method factors (Geiser et al., 2008), three different analyses were conducted. In the first analysis (analysis1) the teacher was used as reference, in the second (analysis2) the mother, and in the third (analysis3) the father. Conducting these complementary analyses allowed comparisons between all raters (analysis1, 2, or 3), but also allowed us to contrast teacher ratings with parents’ ratings (analysis1), mother ratings with teacher/father ratings (analysis2), and father ratings with teacher/mother ratings (analysis3).
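The Campbell–Fiske criteria above can be illustrated outside any SEM software. The following Python sketch simulates two raters scoring two traits (the trait names, loadings, and noise levels are hypothetical, chosen only to show why monotrait–heteromethod correlations should exceed heterotrait–monomethod ones when convergent validity holds):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical latent traits for n children (independent by construction)
sc = rng.normal(size=n)  # "Social Competence"
aa = rng.normal(size=n)  # "Anger-Aggression"

# Each rater observes each trait with independent rater-specific noise
ratings = {
    ("SC", "mother"): sc + 0.6 * rng.normal(size=n),
    ("SC", "teacher"): sc + 0.6 * rng.normal(size=n),
    ("AA", "mother"): aa + 0.6 * rng.normal(size=n),
    ("AA", "teacher"): aa + 0.6 * rng.normal(size=n),
}

def corr(a, b):
    return float(np.corrcoef(ratings[a], ratings[b])[0, 1])

# Convergent validity: same trait, different raters (monotrait-heteromethod)
mono_hetero = corr(("SC", "mother"), ("SC", "teacher"))
# Discriminant validity: different traits, same rater (heterotrait-monomethod)
hetero_mono = corr(("SC", "mother"), ("AA", "mother"))

print(mono_hetero > hetero_mono)  # convergent correlation should dominate
```

In real MTMM data these correlations are computed between latent factors within the CFA, not between raw composites as in this toy example.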

Sex measurement invariance (MI) was tested using a multiple-group confirmatory factor analysis (MG-CFA). The literature points to some sex differences, although to our knowledge only Munzer et al. (2018) evaluated MI. In most studies girls are rated higher than boys on SC and lower on AA (LaFreniere et al., 2002; Masataka, 2002; Venet et al., 2002; Torres et al., 2014; Vasquez-Echeverria et al., 2016). No sex differences were reported regarding AW, except in two studies (Chen and Jiang, 2002; Blair et al., 2004) where boys were rated higher by their teachers. Bárrig and Parco (2017) found no sex differences.

Based on existing data, we expected more convergence between mother and father ratings than between parent and teacher ratings, and more convergence for externalizing than internalizing problem behavior.

Materials and Methods


Participants were parents and teachers of 369 children (173 boys and 196 girls; ages ranged from 32 to 78 months, M = 55.85, SD = 11.54; 55.8% firstborns and 63.7% with siblings). All attended public preschools. Each of the 45 classes had on average 20 children (19 to 24), and all families were invited to participate (one child per household).

Most parents were married or cohabiting (95.1%). Mothers’ ages ranged between 21 and 47 years (M = 33.53; SD = 6.70), and fathers’ from 23 to 55 (M = 35.97; SD = 7.28). Mothers’ education varied between 4 and 21 years (M = 11.94; SD = 4.59) and fathers’ between 1 and 19 (M = 10.34; SD = 4.64). Most parents worked full-time (mothers M = 38.39 h, SD = 7.34, 20.6% unemployed; fathers M = 41.60 h, SD = 7.23, 9.8% unemployed).

All 45 participating teachers were female, with ages between 41 and 50 years (M = 44.82; SD = 2.75). All had a university degree in early education and 21 to 25 years of experience.


Stratified random sampling was used to select participants: the population was divided into 20 groups corresponding to Portugal’s regions, and within each region a random number table was used to determine the schools to be contacted. From a total of 63 schools, 30 consented to participate, and 45 classes contributed to the study. Parents were asked to complete the questionnaires independently. Forty-one percent of the questionnaires were returned with all the information and with consent for teachers to report on the child’s behavior (one per family). Teachers rated the consented children (from the middle to the end of the school year, to guarantee that they were well acquainted with each child), resulting in 369 complete (usable) sets of mother, father, and teacher ratings; only complete sets were analyzed.
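The stratified selection step can be sketched as follows. The region and school names and the per-region contact count are hypothetical placeholders (the study's actual frame produced 63 contacted schools); the point is only that sampling happens independently within each stratum:

```python
import random

# Hypothetical sampling frame: 20 strata (regions), 6 schools each.
# Names and counts are illustrative, not the study's actual frame.
schools_by_region = {
    f"region_{i}": [f"school_{i}_{j}" for j in range(6)] for i in range(20)
}

rng = random.Random(42)
contacted = []
for region, schools in schools_by_region.items():
    # Draw schools at random within each stratum, mirroring the
    # random-number-table step used in the study.
    contacted.extend(rng.sample(schools, k=3))

print(len(contacted))  # 20 regions x 3 schools = 60
```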


Social Competence and Behavior Evaluation Scale

The scale evaluates patterns of social competence, emotion regulation and expression, as well as adjustment difficulties in children between 30 and 78 months (LaFreniere and Dumas, 1996). It is intended to describe behavioral tendencies of socialization rather than to classify children. It has three 10-item scales that allow assessing the overall quality of the child’s adaptation, including strengths as well as weaknesses: (1) SC, referring to prosocial behaviors; (2) AA, referring to externalizing behaviors; and (3) AW, referring to internalizing behaviors. Responses range from 1 (never) to 6 (always). The SCBE-30 was translated from the original English version into Portuguese following the procedures outlined by the “Committee Approach” (Brislin, 1980).

Data Analysis

For missing data, Little’s MCAR statistic was computed (χ2 = 1402.56, df = 1344, p = 0.13) and the expectation-maximization (EM) algorithm was used. Confirmatory factor analyses (CFA) and MI tests were performed using the R packages lavaan (Rosseel, 2012) and semTools (Jorgensen et al., 2018) to evaluate the fit of the SCBE-30 three-factor model. Given the ordinal nature of the data, we used robust weighted least squares (RWLS) (Flora and Curran, 2004), and configural invariance was evaluated using three robust indices (Hu and Bentler, 1999; Brosseau-Liard et al., 2012): the robust Comparative Fit Index (CFI, ≥0.95 good and ≥0.90 acceptable), the robust Root Mean Square Error of Approximation (RMSEA, ≤0.06 good and ≤0.08 acceptable), and the Weighted Root Mean Square Residual (WRMR, ≤1.0 good, with lower values indicating better fit; Yu and Muthén, 2002). For model fit improvement, factor loadings were considered (<0.40 poor) (Hair et al., 1998). We considered a global model (M1, not distinguishing who answered the questionnaires: mothers, fathers, or teachers). Since we were interested in comparing and contrasting mothers’, fathers’, and teachers’ responses, we modified the model to include that distinction (M2). Because the same child was being reported on by parents and teacher, we accounted for the dependency of the observations by correlating the residuals of the same indicator across parents and teachers (M3).
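The fit cutoffs above can be restated compactly. This helper is our own sketch (the function name and the "poor" fallback label are not from the paper) combining the CFI and RMSEA thresholds; WRMR is omitted for brevity:

```python
# Classify model fit using the robust CFI and RMSEA cutoffs cited above
# (Hu and Bentler, 1999): CFI >= .95 and RMSEA <= .06 -> "good";
# CFI >= .90 and RMSEA <= .08 -> "acceptable"; otherwise "poor".
def rate_fit(cfi: float, rmsea: float) -> str:
    if cfi >= 0.95 and rmsea <= 0.06:
        return "good"
    if cfi >= 0.90 and rmsea <= 0.08:
        return "acceptable"
    return "poor"

print(rate_fit(0.91, 0.038))  # the final M8 solution -> acceptable
```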

The correlated trait–correlated method minus one model (Eid et al., 2008) was selected to test for MI across our structurally different raters, comparing and contrasting them against each other (Eid et al., 2006). This model implies that the trait cannot be measured independently of the method (rater), with each observed variable (item) representing a trait-method unit. By contrasting different methods against each other, the convergent validity of the different methods can be determined. CT-C(M−1) includes two types of latent variables: a reference factor, representing the trait as measured by the reference method, and a method factor, representing the residual variance in the non-reference method (not shared with the reference factor within the same trait). Non-reference methods are contrasted against the reference factor. Since method factors are defined as regression residuals, reference and method factors for the same trait are uncorrelated. To create our models, all indicators of the reference method (teacher in analysis1, mother in analysis2, and father in analysis3) were linked to the appropriate trait factors but not to any method factor. For the non-reference methods (mother and father ratings in analysis1; father and teacher ratings in analysis2; mother and teacher ratings in analysis3), indicators were linked to the appropriate trait factors and method factors. The trait factors were allowed to correlate with each other, as were the method factors, whereas method and trait factors were assumed to be uncorrelated. High trait loadings and comparatively low method loadings of a non-reference method indicate more agreement with the reference method. The method factor is a common residual factor, representing the proportion of a trait measured by the non-reference method that cannot be predicted by the reference true scores. The proportion of variance shared with the reference method is given by the squared standardized loadings of non-reference indicators on the reference factor (consistency coefficient).
The rater-specific variance that cannot be predicted by the true-score variable of the indicator measured by the reference method is given by the squared standardized loadings of non-reference indicators on the method factors (method-specific coefficient). The total reliable variance of an indicator (reliability coefficient) is given by the sum of the consistency and method-specific coefficients (Eid et al., 2003).
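This variance decomposition is simple arithmetic on the standardized loadings; the sketch below restates it (the function name and the example loading values are ours, not estimates from the study):

```python
# Variance decomposition for a non-reference indicator (Eid et al., 2003):
# consistency = squared standardized trait loading, method specificity =
# squared standardized method-factor loading, reliability = their sum.
def decompose(trait_loading: float, method_loading: float) -> dict:
    consistency = trait_loading ** 2
    method_specificity = method_loading ** 2
    return {
        "consistency": consistency,
        "method_specificity": method_specificity,
        "reliability": consistency + method_specificity,
    }

coefs = decompose(0.35, 0.65)  # illustrative: low trait, high method loading
print(coefs["method_specificity"] > coefs["consistency"])  # prints True
```

A pattern like this one (method specificity exceeding consistency) is what the Results below interpret as low agreement with the reference rater.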

Sex invariance was tested using MG-CFA. We analyzed configural invariance (same factor structure, with the same items associated with the same constructs), metric invariance (raters use the questionnaire scales similarly, presenting equivalent loadings), and scalar invariance (equivalent item thresholds) (Geiser et al., 2014). A level of MI was considered achieved when the differences in fit indices between a model and the preceding, less constrained model were ≤0.01 for ΔCFI and ≤0.015 for ΔRMSEA (Chen, 2007). Latent mean differences between child sexes (for all raters on the SCBE-30 dimensions) were compared using a full scalar invariance model as the baseline.
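The decision rule can be written as a one-line predicate (helper name is ours; absolute differences are used, matching how the thresholds are applied above):

```python
# Decision rule used above (Chen, 2007): a level of invariance holds when the
# change in fit vs. the less constrained model satisfies |dCFI| <= .010 and
# |dRMSEA| <= .015.
def invariance_holds(delta_cfi: float, delta_rmsea: float) -> bool:
    return abs(delta_cfi) <= 0.010 and abs(delta_rmsea) <= 0.015

print(invariance_holds(0.001, -0.001))   # prints True
print(invariance_holds(-0.020, 0.001))   # prints False (CFI change too large)
```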


Confirmatory Factor Analyses of SCBE-30

Prior to our main analyses we examined the item distributions (see Table 1). To evaluate model fit and consistency with the data we performed a CFA using RWLS. As shown in Table 2, the initial model (M1), using all 30 items organized in three factors (not considering different raters), did not present an acceptable fit. For model improvement, we took into consideration that there were three different raters (M2), and in M3 we added residual covariances between raters’ related items, as they were describing the same child (see Table 1 for residual covariances). In M4 we dropped item8 “sad,” which was strongly skewed for all raters (mother Sk = 3.20, Ku = 13.09; father Sk = 2.28, Ku = 6.21; teacher Sk = 2.65, Ku = 9.58). In the following models, we eliminated two items presenting low factor loadings (λ < 0.40) for all raters: in M5 we dropped item6 “worries,” whose rated values were unexpectedly high (especially for parents) and whose modification indices suggested a better fit on SC; in M6 we removed item13 “negotiates solutions to conflicts,” whose values were low and whose modification indices suggested a better fit on AA or AW. In the subsequent models we gradually eliminated two more items presenting low factor loadings for two of the raters: in M7 we deleted item1 “neutral expression,” which presented low factor loadings for fathers and teachers; in M8 we dropped item2 “tired,” which presented low factor loadings for parents, and an acceptable fit (robust CFI = 0.91, RMSEA = 0.038, and WRMR = 1.35) was achieved.


Table 1. SCBE-30 items distributions considering mothers, fathers, and teachers (N = 369).


Table 2. Robust fit indices for SCBE-30 CFA models.

Measurement Invariance Across Mother, Father, and Teacher

In the first CT-C(M−1) analysis we used teachers as the reference method, as major differences were expected between parents and teachers (see Geiser et al., 2012 for guidelines). We used M8, but it did not converge; parameters for three items could not be obtained, and the items were excluded (item3 “easily frustrated,” item4 “angry when interrupted,” and item5 “irritable,” all from AA) [CFI = 0.91, TLI = 0.89; RMSEA = 0.042 (0.040; 0.045), SRMR = 0.063, WRMR = 1.18]. Results showed low trait loadings (<0.40, except for item16 “hits”) and comparatively high (>0.60) method factor loadings, suggesting low agreement between parents and teachers. These interpretations were confirmed by the low reliabilities (SC: mother 0.28 to 0.52, father 0.29 to 0.48; AA: mother 0.28 to 0.59, father 0.29 to 0.55; AW: mother 0.35 to 0.64, father 0.29 to 0.57) and by the method-specific coefficients being higher than the consistency coefficients for all items. We found high associations (0.71 to 0.82) between parents’ method factors for the same trait, showing that parents share a common view of child behavior that is not shared with teachers. Since the method effect goes in the same direction for both parents (positive correlations), when mothers over- or underestimated child behavior (compared to teachers), fathers did the same. The absolute values of correlations between method factors belonging to the same method but different traits were mostly low (all < 0.20) for parents, except when relating AW with AA (0.36 for fathers and 0.50 for mothers); these associations could reflect method biases: parents who overestimate AW also overestimate AA.

In analysis2, with mother as the reference method, robust fit indices were good [CFI = 0.95, RMSEA = 0.033 (0.030; 0.036), SRMR = 0.057, WRMR = 0.95]. As in the previous analysis, teachers’ trait loadings were weak (<0.40, except item16 “hits”) compared to their method factor loadings (all > 0.60). However, fathers’ trait loadings were above 0.52 (except for item28 “opposes”) and their method factor loadings presented the lowest values. Fathers’ indicators had larger consistency than method-specificity coefficients, meaning that (as in analysis1) there is good support for mother-father convergence, but not for mother-teacher convergence. There was no significant association between method factors belonging to the same trait, showing that father and teacher do not share a common view of child behavior beyond the one shared with the mother. The absolute values of associations between method factors belonging to the same method but different traits were significant for teachers (0.18 to 0.51): AW and AA were positively (though weakly) correlated, whereas the correlations between SC and these two traits were negative. Compared to mothers, teachers who overestimate AW also overestimate AA, and when they overestimate AA or AW they underestimate SC. For fathers, only the relation between AW and AA was significant (r = 0.58), meaning that (compared to mothers) fathers who under- or overestimate AW do the same for AA.

Finally, in analysis3, with father as reference, robust fit indices were good [CFI = 0.94, RMSEA = 0.034 (0.031; 0.037), SRMR = 0.057, WRMR = 0.96]. As in the previous analyses, teachers’ trait loadings were weak (all < 0.40) while their method factor loadings were strong (all > 0.60). Again, when comparing parents, most trait loadings were good and indicators had larger consistency than method-specificity coefficients (except item30 “pleasure in own accomplishments” and items 9 “inhibited,” 14 “isolated,” and 16 “hits”); there is good support for convergence between parents, but not between fathers and teachers. Mother and teacher share a common view of the child that is not shared with the father, specifically a positive and significant (although low) association for SC (r = 0.18) and for AA (r = 0.26), meaning that when mothers over- or underestimated child behavior (compared to fathers), teachers did the same. The absolute values of the associations between method factors belonging to the same method but different traits were significant for teachers (0.13 to 0.53); for mothers, only the relation between AW and AA was significant (0.69). These associations could reflect method biases: teachers and mothers who overestimate a child’s AW also overestimate AA. For teachers, the correlations between SC and those two traits were negative, meaning that teachers who overestimate a child’s AW or AA also underestimate SC.

Measurement Invariance Between Boys and Girls

To test for MI across child sex, we performed a MG-CFA for each rater separately. There were some cross-table zeros; therefore, we collapsed a few items’ categories (Higgins, 2004). For teachers, we collapsed category 6 into 5 for item12 “inactive” (girls 0; boys 1), item23 “unnoticed” (girls 2; boys 0), and item28 “defiant” (girls 1; boys 0), category 5 into 4 for item14 “isolated” (girls 0; boys 3), and category 2 into 3 for item30 “pleasure in accomplishments” (girls 0; boys 7). Since the MG-CFA results were below the cut points for the metric (ΔCFI = 0.001; ΔRMSEA = −0.001) and scalar (ΔCFI = −0.001; ΔRMSEA = 0.008) models, invariance was achieved. For mothers, we collapsed category 6 into 5 for item6 “hits” (girls 1; boys 0), item18 “conflict” (girls 1; boys 0), and item4 “isolated” (girls 0; boys 3), and category 2 into 3 for item30 “pleasure in accomplishments” (girls 3; boys 0). Results were below the cut points for the metric (ΔCFI = 0.001; ΔRMSEA = −0.003) and scalar (ΔCFI = −0.004; ΔRMSEA = −0.006) models, and invariance was achieved. For fathers, we collapsed category 6 into 5 for item9 “inhibited” (girls 0; boys 1) and item18 “conflict” (girls 0; boys 2), and category 5 into 4 for item14 “isolated” (girls 0; boys 1). Results were below the cut points for the metric (ΔCFI = −0.005; ΔRMSEA = 0.003) and scalar (ΔCFI = 0.001; ΔRMSEA = 0.005) models, and invariance was achieved. This suggests that, for all raters, differences lie in the underlying latent trait rather than in the measure. Based on the establishment of full scalar invariance across child sex, latent mean differences on the SCBE-30 were explored for the different raters.
Results showed that girls had significantly higher SC than boys for all raters, with a small effect for parents and a medium effect for teachers (mother: Mgirls = 4.18, SDgirls = 0.69, Mboys = 4.03, SDboys = 0.69, t(367) = −2.12, p < 0.05, Cohen’s d = −0.22; father: Mgirls = 4.15, SDgirls = 0.67, Mboys = 4.00, SDboys = 0.64, t(367) = −2.23, p < 0.05, Cohen’s d = −0.23; teacher: Mgirls = 4.07, SDgirls = 0.82, Mboys = 3.65, SDboys = 0.87, t(367) = −4.76, p < 0.001, Cohen’s d = −0.50). No differences were found regarding AW. For AA, boys had significantly higher scores than girls for mothers and teachers, with no differences for fathers’ ratings (mother: Mgirls = 1.99, SDgirls = 0.49, Mboys = 2.15, SDboys = 0.50, t(367) = 3.13, p < 0.01, Cohen’s d = 0.33; teacher: Mgirls = 1.66, SDgirls = 0.76, Mboys = 1.97, SDboys = 0.89, t(367) = 3.58, p < 0.001, Cohen’s d = 0.39).
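As a rough check, the pooled-SD Cohen's d can be recomputed from the reported group means and SDs. The sketch below redoes the teacher SC comparison (girls n = 196, boys n = 173; the function and sign convention here are ours):

```python
import math

# Pooled-standard-deviation Cohen's d for two independent groups.
def cohens_d(m1, sd1, n1, m2, sd2, n2):
    pooled = math.sqrt(
        ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    )
    return (m1 - m2) / pooled

# Boys first, girls second, matching the negative sign reported for SC.
d = cohens_d(3.65, 0.87, 173, 4.07, 0.82, 196)
print(round(d, 2))  # close to the reported d = -0.50
```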

Social Competence, Anger-Aggression, Anxiety-Withdrawal: Associations Within and Between Raters

For parents, we found a positive relation between AA and AW (mothers r = 0.33, p < 0.001; fathers r = 0.30, p < 0.001); for teachers this relation was negative (r = −0.14, p < 0.01). Associations between AA and SC were negative for all raters (mothers r = −0.11, p < 0.05; fathers r = −0.17, p < 0.001; teachers r = −0.46, p < 0.001). AW and SC were negatively associated, but only for teachers (r = −0.24, p < 0.001). Correlations between raters were positive for SC (0.20 to 0.61) and AW (0.12 to 0.49), especially between parents. For AA, we found positive associations between parents (r = 0.47, p < 0.001) and between mothers and teachers (r = 0.19, p < 0.001). Both parents described children as more socially competent but also as more maladapted than teachers did, and fathers perceived child behavior as more anxious-withdrawn than mothers did (see Table 3).


Table 3. Mean differences between raters considering SC, AW, and AA dimensions.


Our study makes important methodological contributions: child behavior was described by three raters, not only the teacher but also the parents (including the father’s perspective), and we used CT-C(M−1) to compare and contrast raters.

The SCBE-30 three-factor structure was analyzed with all raters taken simultaneously, considering the dependency of observations and the ordinal nature of the data. The factor structure remained the same, though some items were excluded (items 8, 6, 13, 1, and 2). Item8 “sad” was excluded due to normality problems: all raters described children as usually not sad or depressed, which is expectable in a non-clinical sample such as ours. Sette et al. (2014) also excluded this item, based on cross-loadings on both AW and AA. Item6 “worries” was excluded due to low loadings for all raters; its values were unexpectedly high (especially for parents) and modification indices suggested a better fit on SC. This might be due to a translation issue: raters could be interpreting the Portuguese word “preocupa-se” more in the sense of “being thoughtful,” which is more related to SC. The same word was used by Vasquez-Echeverria et al. (2016), where the item was also excluded due to low factor loading. A Brazilian study (Brigas and Dessen, 2002) used “desassossegado,” and the item was also excluded, as in other non-English studies (e.g., Butovskaya and Demianovitsch, 2002; Sette et al., 2014). Item13 “negotiates solutions to conflicts” presented low factor loadings for all raters, and modification indices suggested a better fit on AA or AW. Raters could be reporting how frequently the child is involved in conflicts rather than the ability to negotiate solutions with others (even if the child is not frequently involved in conflicts). Vasquez-Echeverria et al. (2016) also excluded it, considering it distinct from the rest of the SC items.

A strong agreement between parents was found, and low agreement when comparing parents with teachers (for all SCBE-30 dimensions). A previous study by Klyce et al. (2011), which analyzed teachers’ and parents’ (92.8% mothers) ratings on the SCBE-30, suggested that the low agreement found between raters could be related to context: teachers might be concerned with disruptive behavior in the classroom, whereas parents might have more opportunities to notice children coping positively when facing affective/emotional challenges. Our results showed that parents share a common view of child behavior that is not shared with teachers. Different opportunities, concerns, knowledge, expectations, and experiences could influence their perceptions. Parents observe qualitatively different behaviors and have greater familiarity with their children’s verbal and nonverbal cues in multiple contexts, whereas teachers have only one context, although they have multiple children to compare with and more academic knowledge related to child development. A meta-analysis regarding behavioral/emotional problems (Achenbach et al., 1987) reported significantly higher correlations for similar informants (e.g., mothers-fathers), whereas ratings from different types of informants (e.g., parents, teachers) were less correlated. A more recent meta-analysis (Renk and Phares, 2004) regarding social competence reported modest average weighted effect sizes for both mother-father and parent-teacher ratings.

Another interesting finding is the higher agreement between mothers and teachers (compared to fathers and teachers). Mother and teacher share a common view of child behavior that is not shared with the father, whereas father and teacher do not share a common view beyond the one that is shared with the mother. This could be due to how schools include fathers, acting in a gender-typed manner (Klinman, 1986), with teachers talking about children mainly with mothers. Also, fathers typically invest less time and effort (Torres et al., 2014) and might not have the same opportunities to observe behaviors. It could also be related to individual differences in tolerance for various behaviors (Youngstrom et al., 2000): raters might differ in perceiving the occurrence or severity of behaviors. Or it could be related to gender bias, since all teachers were female.

A recent meta-analysis (Rescorla et al., 2014) reported that parent–teacher agreement was higher for externalizing and attention problems than for internalizing problems. Our results suggest that parents have more difficulty distinguishing internalizing from externalizing behaviors (especially fathers), associating higher AA with higher AW behaviors, whereas teachers seem to distinguish those behaviors and described children with higher SC scores as those who also presented fewer AW and AA behaviors. The ways in which raters regard social or problem behavior may contribute to rating differences. A previous study using parent and teacher ratings and observational data found that the correspondence between ratings and independent observations of problem behavior varied as a function of the type of problem (internalizing/externalizing): only parents’ ratings of internalization predicted observed isolation and withdrawal, whereas only teachers’ ratings of externalization predicted observed disobedient and aggressive actions (Hinshaw et al., 1992).

Our results suggest that differences between boys and girls are not due to the measure. Girls (more than boys) were described as more socially competent, while boys (more than girls) were described as more aggressive. These results are consistent with the literature (LaFreniere et al., 2002; Diener and Kim, 2004; Torres et al., 2014; Vasquez-Echeverria et al., 2016). Parents and teachers may be more aware of boys’ misbehavior and more tolerant with girls; they could expect boys to have more problem behaviors (Berg-Nielsen et al., 2012) and girls to display more socially competent behaviors (Birch and Ladd, 1998; Coolahan et al., 2000).

We recognize some study limitations. The analysis was based on parents’ and teachers’ perceptions of children’s social behavior rather than on direct observation, which may yield different interpretations. Additional bias factors could be present that were not controlled (e.g., fatigue, response bias, or contrast effects, since parents rated only one child while teachers rated several children). Our sample presented 6% multivariate outliers (Mahalanobis distance, χ2(29) = 58.30, p < 0.001). Also, exploratory structural equation modeling (ESEM) could represent a better option, since CFA might fail to meet standards of good measurement (e.g., goodness-of-fit, MI, and well-differentiated factors) (Marsh et al., 2020). It should also be noted that for our sample (although for a small number of items) a category-collapsing technique was used to test for MI across child sex; replication is needed to examine the robustness of our findings.

Future research could benefit from multilevel analyses, with teachers' ratings nested within classrooms, and could explore discrepancies between raters in more detail (e.g., item-level analyses) to identify specific behaviors or contexts. It is also important to consider how discrepancies between parents and teachers affect their communication and, in turn, how this could affect children.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by the ISPA Ethics Committee. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

MV, AS, and MF: conception of the work. MA, CF, MF, and LM: data collection. MF: data analysis and drafting of the manuscript. MF, MV, AS, and BV: data interpretation and editing of the manuscript. All authors read and commented on the manuscript.


Funding

This work was supported by the Portuguese Foundation for Science and Technology (FCT) (FCT-PTDC/MHC-PED/0838/2014 and UIDB/04810/2020).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Acknowledgments

The authors are grateful for the participation and support from students and teachers who have welcomed researchers into their classrooms. They also thank all the members of the research team for their assistance and support.

Supplementary Material

The Supplementary Material for this article can be found online at:


References

Achenbach, T. M., McConaughy, S. H., and Howell, C. T. (1987). Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity. Psychol. Bull. 101, 213–232. doi: 10.1037/0033-2909.101.2.213

Bárrig, P., and Parco, D. (2017). Temperamento y competencia social en niños y niñas preescolares de San Juan de Lurigancho: un estudio preliminar. Liberabit 23, 75–88.

Berg-Nielsen, T. S., Solheim, E., Belsky, J., and Wichstrom, L. (2012). Preschoolers’ psychosocial problems: in the eyes of the beholder? Adding teacher characteristics as determinants of discrepant parent-teacher reports. Child Psychiatry Hum. Dev. 43, 393–413. doi: 10.1007/s10578-011-0271-0

Birch, S. H., and Ladd, G. W. (1998). Children’s interpersonal behaviors and the teacher-child relationship. Dev. Psychol. 34, 934–946. doi: 10.1037/0012-1649.34.5.934

Blair, K., Denham, S., Kochanoff, A., and Whipple, B. (2004). Playing it cool: temperament, emotion regulation, and social behavior in preschoolers. J. Sch. Psychol. 42, 419–443. doi: 10.1016/j.jsp.2004.10.002

Bigras, M., and Dessen, M. A. (2002). Social competence and behavior evaluation in Brazilian preschoolers. Early Educ. Dev. 13, 139–152. doi: 10.1207/s15566935eed1302_2

Brislin, R. (1980). “Translation and content analysis for oral and written material,” in Handbook of Cross-Cultural Psychology, Vol. 2, eds H. Triandis and J. Berry, (Needham Heights, MA: Allyn and Bacon), 389–444.

Brosseau-Liard, P., Savalei, V., and Li, L. (2012). An investigation of the sample performance of two non-normality corrections for RMSEA. Multivar. Behav. Res. 47, 904–930. doi: 10.1080/00273171.2012.715252

Butovskaya, M. L., and Demianovitsch, A. N. (2002). Social competence and behavior evaluation (SCBE-30) and socialization values (SVQ): Russian children ages 3 to 6 years. Early Educ. Dev. 13, 153–170. doi: 10.1207/s15566935eed1302_3

Campbell, D. T., and Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol. Bull. 56, 81–105. doi: 10.1037/h0046016

Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Struct. Equ. Model. 14, 464–504. doi: 10.1080/10705510701301834

Chen, Q., and Jiang, Y. (2002). Social competence and behavior problems in Chinese preschoolers. Early Educ. Dev. 13, 171–186. doi: 10.1207/s15566935eed1302_4

Coolahan, K. C., Fantuzzo, J., Mendez, J., and McDermott, P. (2000). Preschool peer interactions and readiness to learn: relationships between classroom peer play and learning behaviors and conduct. J. Educ. Psychol. 92, 458–465. doi: 10.1037/0022-0663.92.3.458

Diener, M. L., and Kim, D. Y. (2004). Maternal and child predictors of preschool children’s social competence. J. Appl. Dev. Psychol. 25, 3–24. doi: 10.1016/j.appdev.2003.11.006

Dumas, J. E., Arriaga, X. B., Begle, A., and Longoria, Z. N. (2011). Child and parental outcomes of a group parenting intervention for Latino families: a pilot study of the CANNE program. Cultur. Divers. Ethnic Minor. Psychol. 17, 107–115. doi: 10.1037/a0021972

Eid, M. (2000). A multitrait-multimethod model with minimal assumptions. Psychometrika 65, 241–261. doi: 10.1007/BF02294377

Eid, M., Nussbeck, F. W., Geiser, C., Cole, D. A., Gollwitzer, M., and Lischetzke, T. (2008). Structural equation modeling of multitrait–multimethod data: different models for different types of methods. Psychol. Methods 13, 230–253. doi: 10.1037/a0013219

Eid, M., Lischetzke, T., and Nussbeck, F. W. (2006). “Structural equation models for multitrait-multimethod data,” in Handbook of Multimethod Measurement in Psychology, eds M. Eid and E. Diener, (Washington, DC: American Psychological Association).

Eid, M., Lischetzke, T., Nussbeck, F. W., and Trierweiler, L. (2003). Separating trait effects from trait-specific method effects in multitrait–multimethod models: a multiple-indicator CT-C(M–1) model. Psychol. Methods 8, 38–60. doi: 10.1037/1082-989X.8.1.38

Flora, D., and Curran, P. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychol. Methods 9, 466–491. doi: 10.1037/1082-989X.9.4.466

Geiser, C., Burns, L., and Servera, M. (2014). Testing for measurement invariance and latent mean differences across methods: interesting incremental information from multitrait-multimethod studies. Front. Psychol. 5:1216. doi: 10.3389/fpsyg.2014.01216

Geiser, C., Eid, M., and Nussbeck, F. W. (2008). On the meaning of the latent variables in the CT-C(M–1) model: a comment on Maydeu-Olivares and Coffman (2006). Psychol. Methods 13, 49–57. doi: 10.1037/1082-989X.13.1.49

Geiser, C., Eid, M., West, S. G., Lischetzke, T., and Nussbeck, F. W. (2012). A comparison of method effects in two confirmatory factor models for structurally different methods. Struct. Equ. Model. 19, 409–436. doi: 10.1080/10705511.2012.687658

Hair, J. F., Anderson, R. E., Tatham, R. L., and Black, W. C. (1998). Multivariate Data Analysis, 5th Edn. Englewood Cliffs, NJ: Prentice-Hall.

Higgins, J. (2004). Introduction to Modern Nonparametric Statistics. London: Thomson Learning.

Hinshaw, S. P., Han, S. S., Erhardt, D., and Huber, A. (1992). Internalizing and externalizing behavior problems in preschool children: correspondence among parent and teacher ratings and behavior observations. J. Clin. Child Psychol. 21, 143–150. doi: 10.1207/s15374424jccp2102_6

Hu, L., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model. 6, 1–55. doi: 10.1080/10705519909540118

Jorgensen, T. D., Pornprasertmanit, S., Schoemann, A. M., and Rosseel, Y. (2018). SemTools: Useful tools for structural equation modeling. R package version 0.5-1.

Klinman, D. G. (1986). “Fathers and the educational system,” in The Father’s Role: Applied Perspectives, ed. M. E. Lamb, (New York, NY: Wiley), 413–428.

Klyce, D., Conger, A., Conger, J., and Dumas, J. (2011). Measuring competence and dysfunction in preschool children: source agreement and component structure. J. Child Fam. Stud. 20, 503–510. doi: 10.1007/s10826-010-9417-0

Konold, T. R., Walthall, J. C., and Pianta, R. C. (2004). The behavior of child behavior ratings: measurement structure of the child behavior checklist across time, informants, and child gender. Behav. Disord. 29, 372–383. doi: 10.1177/019874290402900405

Kotler, J. C., and McMahon, R. J. (2002). Differentiating anxious, aggressive, and socially competent preschool children: validation of the Social Competence and Behavior Evaluation-30 (parent version). Behav. Res. Ther. 40, 947–959. doi: 10.1016/S0005-7967(01)00097-3

LaFreniere, P., Masataka, N., Butovskaya, M., Chen, Q., Dessen, M. A., Atwanger, K., et al. (2002). Cross-cultural analysis of social competence and behavior problems in preschoolers. Early Educ. Dev. 13, 201–220. doi: 10.1207/s15566935eed1302_6

LaFreniere, P. J., and Dumas, J. E. (1996). Social competence and behavior evaluation in children ages 3 to 6 years: the short form (SCBE-30). Psychol. Assess. 8, 369–377. doi: 10.1037/1040-3590.8.4.369

Lance, C., Noble, C., and Scullen, S. (2002). A critique of the correlated trait-correlated method and correlated uniqueness models for multitrait-multimethod data. Psychol. Methods 7, 228–244. doi: 10.1037//1082-989X.7.2.228

Marsh, H., Guo, J., Dicke, T., Parker, P., and Craven, R. (2020). Confirmatory Factor Analysis (CFA), Exploratory Structural Equation Modeling (ESEM), and Set-ESEM: optimal balance between goodness of fit and parsimony. Multivar. Behav. Res. 55, 102–119. doi: 10.1080/00273171.2019.1602503

Masataka, N. (2002). Low anger-aggression and anxiety-withdrawal characteristic to preschoolers in Japanese society where ‘Hikikomori’ is becoming a major social problem. Early Educ. Dev. 13, 187–200. doi: 10.1207/s15566935eed1302_5

Munzer, T., Miller, A., Brophy-Herb, H., Peterson, K. E., Horodynski, M. A., Contreras, D., et al. (2018). Characteristics associated with parent-teacher concordance on child behavior problem ratings in low-income preschoolers. Acad. Pediatr. 18, 452–459. doi: 10.1016/j.acap.2017.10.006

Renk, K., and Phares, V. (2004). Cross-informant ratings of social competence in children and adolescents. Clin. Psychol. Rev. 24, 239–254. doi: 10.1016/j.cpr.2004.01.004

Rescorla, L., Bochicchio, L., Achenbach, T., Ivanova, M. Y., Almqvist, F., Begovac, I., et al. (2014). Parent-teacher agreement on children’s problems in 21 societies. J. Clin. Child Adolesc. Psychol. 43, 627–642. doi: 10.1080/15374416.2014.900719

De Los Reyes, A., and Kazdin, A. E. (2005). Informant discrepancies in the assessment of childhood psychopathology: a critical review, theoretical framework, and recommendations for further study. Psychol. Bull. 131, 483–509. doi: 10.1037/0033-2909.131.4.483

Rosseel, Y. (2012). lavaan: an R package for structural equation modeling. J. Stat. Softw. 48, 1–36. doi: 10.18637/jss.v048.i02

Sette, S., Baumgartner, E., and MacKinnon, D. (2014). Assessing social competence and behavior problems in a sample of Italian preschoolers using the social competence and behavior evaluation scale. Early Educ. Dev. 26, 45–65. doi: 10.1080/10409289.2014.941259

Torres, N., Verissimo, M., Monteiro, L., Ribeiro, O., and Santos, A. J. (2014). Domains of father involvement, social competence and problem behavior in preschool children. J. Fam. Stud. 20, 188–203. doi: 10.1080/13229400.2014.11082006

Uher, J., Werner, C. S., and Gosselt, K. (2013). From observations of individual behaviour to social representations of personality: developmental pathways, attribution biases, and limitations of questionnaire methods. J. Res. Personal. 47, 647–667. doi: 10.1016/j.jrp.2013.03.006

Vasquez-Echeverria, A., Rocha, T., Leite, J., Teixeira, P., and Cruz, O. (2016). Portuguese validation of the social competence and behavior evaluation scale (SCBE-30). Psicologia 29, 1–6. doi: 10.1186/s41155-016-0014-z

Venet, M., Bigras, M., and Normandeau, S. (2002). Les qualités psychométriques du PSA-A. Can. J. Behav. Sci. 34, 163–167. doi: 10.1037/h0087168

Winsler, A., and Wallace, G. L. (2002). Behavior problems and social skills in preschool children: parent-teacher agreement and relations with classroom observations. Early Educ. Dev. 13, 41–58. doi: 10.1207/s15566935eed1301_3

Youngstrom, E., Loeber, R., and Stouthamer-Loeber, M. (2000). Patterns and correlates of agreement between parent, teacher, and male adolescent ratings of externalizing and internalizing problems. J. Consult. Clin. Psychol. 68, 1038–1050. doi: 10.1037//0022-006x.68.6.1038

Yu, C. Y., and Muthén, B. (2002). “Evaluation of model fit indices for latent variable models with categorical and continuous outcomes,” in Proceedings of the Annual Meeting of the American Educational Research Association, New Orleans, LA.

Zupancic, M., Gril, A., and Kavcic, T. (2000). The Slovenian version of the social competence and behavior evaluation scale, preschool edition (OLSP): the second preliminary validation. Psiholoska obzorja 9, 7–23.

Keywords: measurement invariance, correlated trait–correlated method minus one model, multiple informants, SCBE-30, social competence

Citation: Fernandes M, Santos AJ, Antunes M, Fernandes C, Monteiro L, Vaughn BE and Veríssimo M (2020) Convergent and Discriminant Validities of SCBE-30 Questionnaire Using Correlated Trait–Correlated Method Minus One. Front. Psychol. 11:571792. doi: 10.3389/fpsyg.2020.571792

Received: 11 June 2020; Accepted: 22 September 2020;
Published: 16 October 2020.

Edited by:

Cesar Merino-Soto, University of San Martín de Porres, Peru

Reviewed by:

Seongah Im, University of Hawaii at Manoa, United States
Carlos Santoyo, National Autonomous University of Mexico, Mexico

Copyright © 2020 Fernandes, Santos, Antunes, Fernandes, Monteiro, Vaughn and Veríssimo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Manuela Veríssimo,