Impressive Words: Linguistic Predictors of Public Approval of the U.S. Congress

What type of language makes the most positive impression in a professional setting? Is competent/agentic language or warm/communal language more effective at eliciting social approval? We examined this basic social-cognitive question in a real-world context using a "big data" approach: the recent record-low levels of public approval of the U.S. Congress. Using Linguistic Inquiry and Word Count (LIWC), we text-analyzed all 123+ million words spoken by members of the U.S. House of Representatives during floor debates between 1996 and 2014 and compared their usage of various classes of words to public approval ratings of Congress over the same period. We found that neither agentic nor communal language positively predicted public approval. However, this may be because communion combines two disparate social motives (belonging and helping). A follow-up analysis found that the helping form of communion positively predicted public approval, and did so more strongly than agentic language did. Next, we conducted an exploratory analysis, examining which of the 63 standard LIWC categories predict public approval. Public approval of Congress was highest when politicians used tentative language, expressed both positive emotion and anxiety, used human words, numbers, and prepositions, and avoided conjunctions and second-person pronouns. These results highlight the primacy of warmth over competence, the two primary dimensions of social cognition.

communion dictionary predicted public approval, r(206) = .49, p < .001. We expected this result because we borrowed heavily from Hart's communion dictionary when we originally developed the prosocial words dictionary (in Frimer et al., 2014). We suspect that Hart's communion scores combine "helping" with "belonging" forms of communion. An analysis supports this notion: when prosocial words and Hart communion were entered together as predictors of public approval, Hart communion's unique contribution to public approval became negative, β = -1.04, p < .001, whereas prosocial words still uniquely and positively predicted approval, β = 1.56, p < .001. Hence, these results remain consistent with the interpretations we make in the manuscript: the American public prefers a Congress that talks about helping, but not one that talks about belonging.
We retested Hypotheses 1 and 2 using Hart dictionary scores (rather than the coded ones). The results are largely similar to those using coded scores (see Supplementary Tables 1 and 2). Because Hart communion shares most of its variability with the prosocial words dictionary, both regressions display results that resemble Table 3 of the manuscript. Prosocial words and Hart's communion dictionary both positively predict approval, even when controlling for outside variables. Helping language remains the key linguistic predictor of public approval. Unlike coded agency, Hart's agency has a small but significant negative correlation with public approval; this finding is not particularly pertinent to our research questions. Another difference between the two analyses emerges upon adding control variables to the model (Model 3). In both models, the interaction between agency and communion/prosocial words becomes just significant when controlling for other variables (see Supplementary Figure 1). To describe the nature of the interaction, we recentered agency at 1 SD above and below the mean, re-ran the regression analysis, and plotted public approval at 1 SD above and below the mean on communal or prosocial language. When Congress used highly agentic language, prosocial or communal language did not predict approval. However, when Congress' rhetoric was relatively lacking in agency, communal and prosocial language positively predicted public approval. Although this diverges from the results reported in the manuscript, we note that the interaction effect sizes are consistently small: |β| = .06 when coded; |β| = .10 when using Hart's dictionary. Moreover, the interaction between agency and communion was not a primary focus of the present research.
In summary, using Hart's agency and communion dictionaries, we found that "helping" language is robustly the strongest linguistic predictor of public approval of Congress. Agency is generally a weak negative predictor of public approval. These results are generally consistent with the interpretation we offer in the manuscript.

Does Congress' Composition Moderate the Predictors of Approval?
We examined whether the linguistic predictors of Congress depend on the demographic make-up of Congress. In particular, we investigated whether the public favors a Congress that exhibits stereotype-consistent behavior. For example, the public may approve of agentic males and communal females (Eagly, 1987), or agentic Republicans and communal Democratic Party members.
To examine these questions, we tested whether the usage of certain types of language (agency, communion, prosocial communion) interacts with the composition of Congress (gender balance, partisan composition) to predict public approval. To foreshadow the results, we found several interactions, some suggesting that the public favors stereotype-confirming behavior but even more suggesting the opposite. These mixed results need to be interpreted with caution, however, insofar as gender balance is conflated with time in this data set.
We collected data on the composition of Congress from www.fec.gov. Between 1996 and 2014, Congress comprised 48.9% Democratic Party members on average (SD = 4.1%) and 18.6% female members on average (SD = 2.7%). Supplementary Figure 2 shows the trajectories of the gender and party composition of Congress over the period of study.
To test for interactions between linguistic categories and demographic variables, we conducted regression analyses, each of which predicted public approval. In each, we entered standardized agency, communion, or prosocial words and one demographic category, along with their two-way and three-way interaction terms. When interactions were significant, we decomposed them in follow-up analyses.
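As an illustration of this moderation setup, the sketch below fits such a model on simulated data using the statsmodels formula interface, in which the `*` operator expands into main effects plus all interaction terms. The variable names (`approval`, `agency`, `prosocial`, `gender_pct`) and the generating coefficients are hypothetical placeholders, not the actual measures.

```python
# Hypothetical sketch of a language x demographics moderation regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 208  # illustrative number of monthly observations

df = pd.DataFrame({
    "agency": rng.normal(size=n),
    "prosocial": rng.normal(size=n),
    "gender_pct": rng.normal(size=n),
})
df["approval"] = (
    0.3 * df["prosocial"] - 0.2 * df["agency"]
    + 0.15 * df["prosocial"] * df["gender_pct"]
    + rng.normal(scale=0.5, size=n)
)

# Standardize predictors, then fit main effects plus all two-way terms
# and the three-way interaction in a single model.
for col in ["agency", "prosocial", "gender_pct"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

model = smf.ols("approval ~ agency * prosocial * gender_pct", data=df).fit()
print(model.params)
```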

Gender × agentic and communal language.
We begin by examining whether agentic and communal language interact with gender to predict public approval. Supplementary Table 3 presents the results. Supplementary Figure 3 (both panels) displays the strong main effect of gender: a small (5%) increase in female representation predicted a large (23%) decrease in public approval. We also found significant interactions between each linguistic category and gender, but no three-way interaction.
To describe the nature of the two-way interactions (between language and gender), we recentered the gender balance variable at 1 SD above (21% female) and below (16% female) the mean, re-ran the regression analysis, and plotted public approval at 1 SD above and below the mean on language.
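This recentering procedure can be sketched as follows: shifting the moderator so that its zero point sits at ±1 SD makes the focal predictor's coefficient equal the simple slope at that level. The data and variable names below are simulated and illustrative.

```python
# Simple-slopes via recentering the moderator at +/- 1 SD (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 208
df = pd.DataFrame({"language": rng.normal(size=n),
                   "gender": rng.normal(size=n)})
# Generating model: the effect of language grows with the moderator.
df["approval"] = (df["language"] * (0.4 * df["gender"])
                  + rng.normal(scale=0.3, size=n))

slopes = {}
for label, shift in [("high_female", 1.0), ("low_female", -1.0)]:
    d = df.copy()
    # Recenter so that gender = 0 corresponds to +/- 1 SD:
    d["gender"] = d["gender"] - shift * d["gender"].std()
    fit = smf.ols("approval ~ language * gender", data=d).fit()
    # The 'language' coefficient is now the simple slope at that level.
    slopes[label] = fit.params["language"]

print(slopes)
```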
The left panel of Supplementary Figure 3 shows that Congressional use of agentic language predicted public disapproval, but only when Congress was especially dominated by males. The right panel of Supplementary Figure 3 shows that communal language negatively predicted public approval, but only when Congress was less dominated by males. Simply put, the public disapproved of gender-stereotyped language displays (agentic males and communal females). These results do not align with an existing literature suggesting that people respond with agitation and disapproval to stereotype-disconfirming displays (e.g., Förster, Higgins, & Werth, 2004). Caution is due when interpreting these findings because female representation increased monotonically over the period under study, r(206) = .98, p < .001. Hence, time and female representation are confounded in this analysis.

Gender × agentic and prosocial language.
Next, we examined whether agentic language and the prosocial form of communal language interact with gender to predict public approval. Supplementary Table 4 presents the results of these analyses. We reproduced the same interaction between agency and gender as in the previous analysis. A novel finding in this analysis was an interaction between prosocial language and gender, such that prosocial language had a larger effect on public approval when Congress was relatively female-rich. This finding is consistent with the notion that people approve of stereotype-confirming behavior.

Language × Party Control Interactions
Next, we ran parallel analyses, only using party control (Democratic vs. Republican) instead of gender as the Congressional composition variable.

Party control × agentic and communal language.
Supplementary Table 5 presents the results of an analysis of language × party composition interactions. Aside from a main effect of communion (high communion predicts low approval), no effects reached significance.

Party control × agentic and prosocial language.
Supplementary Table 6 presents the results of an analysis of language × party composition interactions. We found an interaction between party composition and prosocial language, such that prosocial language was an especially strong predictor of public approval when Republicans controlled Congress (see Supplementary Figure 3). This finding is again inconsistent with the notion that people approve of stereotype-confirming behavior. We also found main effects of prosocial language (higher > lower) and of party composition (Republican > Democrat). The three-way interaction did not reach significance.

Supplementary Figures
Supplementary Figure 1. The effect of prosocial and communal language on public approval depends on the level of agentic language.