AUTHOR=Petersen Dawson, Almor Amit
TITLE=Agentive linguistic framing affects responsibility assignments toward AIs and their creators
JOURNAL=Frontiers in Psychology
VOLUME=16
YEAR=2025
URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1498958
DOI=10.3389/fpsyg.2025.1498958
ISSN=1664-1078
ABSTRACT=Tech companies often use agentive language to describe their AIs (e.g., the Google Blog claims that “Gemini can understand, explain and generate high-quality code”). Psycholinguistic research has shown that violating animacy hierarchies by placing a nonhuman in this agentive subject position (i.e., grammatical metaphor) influences readers to perceive it as a causal agent. However, it is not yet known how this affects readers’ responsibility assignments toward AIs or the companies that make them. Furthermore, it is not known whether this effect relies on psychological anthropomorphism or on a more limited set of linguistic causal schemas. We investigated these questions by having participants read a short vignette in which “Dr. AI” gave dangerous health advice in one of two framing conditions (AI as Agent vs. AI as Instrument). Participants then rated how responsible the AI, the company, and the patients were for the outcome, and reported their own AI experience. We predicted that participants would assign more responsibility to the AI in the Agent condition, and that participants with lower AI experience would assign higher responsibility to the AI because they would be more likely to anthropomorphize it. The results confirmed these predictions: we found an interaction between linguistic framing condition and AI experience such that lower AI experience participants assigned higher responsibility to the AI in the Agent condition than in the Instrument condition (z = 2.13, p = 0.032), while higher AI experience participants did not. Our findings suggest that the effects of agentive linguistic framing toward non-humans are attenuated by domain experience because it reduces anthropomorphism.