ORIGINAL RESEARCH article

Front. Lang. Sci.

Sec. Psycholinguistics

This article is part of the Research Topic "Insights in Psycholinguistics: 2025".

Integrating Language Model Embeddings into the ACT-R Cognitive Modeling Framework

Provisionally accepted
  • 1Helmholtz Zentrum München Computational Health Center, Neuherberg, Germany
  • 2University of California Los Angeles, Los Angeles, United States
  • 3Department of Computer Science, Saarland University, Saarbrücken, Germany
  • 4Department of Language Science and Technology, Saarland University, Saarbrücken, Germany

The final, formatted version of the article will be published soon.

In 2025, psycholinguistic research has the benefit of large, high-quality datasets of human behavior and massively scalable metrics for variables of interest such as frequency and association. This means we have more data than ever before to shed light on classic language-processing phenomena like associative priming. But to build and test rigorous theories against these data, we also need computational modeling tools that can simulate cognitive mechanisms and generate quantitative predictions at the same scale. In this paper, we assemble one such case, adapting the ACT-R cognitive modeling framework to make use of association metrics derived from language model embeddings, in service of a scalable model of associative priming in the Lexical Decision Task. ACT-R implements a model of memory retrieval that can use item-wise predictors like frequency and association to predict task response times (RTs), via interpretable and meaningfully parameterized components like spreading activation. Currently, however, ACT-R's spreading-activation calculations rely on manually coded similarity scores, which are labor-intensive and prone to inaccuracies, particularly for large vocabularies. In this study, we replace these hand-coded associations with cosine similarity scores derived from Word2Vec and BERT embeddings, thereby improving both scalability and predictive accuracy while retaining ACT-R's interpretability. We compare various versions of our model against observed human RTs from the Semantic Priming Project dataset, observing impressive item-wise prediction accuracy and achieving the strongest alignment with a model in which spreading activation is penalized via a scalable approximation of the classic 'fan effect.' These findings provide a proof of concept for integrating embedding-based representations into algorithmic-level models of language processing.
More than an insight into models of priming, we see this as a first step towards scalable and specific models of more complex phenomena.
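The mechanism the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the uniform cue weighting, and the flat `fan_penalty` term are placeholders. ACT-R's standard activation equation is A_i = B_i + Σ_j W_j S_ji, and its latency mapping is T = F·e^(−A); the sketch simply substitutes cosine similarity over embedding vectors for the hand-coded association strengths S_ji.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def activation(base_level, cue_vecs, target_vec, W=1.0, fan_penalty=0.0):
    """ACT-R-style activation A_i = B_i + sum_j W_j * S_ji,
    with association strengths S_ji approximated by cosine
    similarity minus a flat fan-style penalty (an assumption;
    the paper's fan approximation may differ)."""
    n = len(cue_vecs)
    spread = sum((W / n) * (cosine(c, target_vec) - fan_penalty)
                 for c in cue_vecs)
    return base_level + spread

def retrieval_time(A, F=1.0):
    """ACT-R maps activation to retrieval latency via T = F * exp(-A):
    higher activation means faster retrieval."""
    return F * math.exp(-A)

# A semantically related prime (similar vector) spreads more activation
# to the target than an unrelated one, predicting faster lexical decisions:
related = activation(0.0, [[1.0, 0.0]], [0.9, 0.1])
unrelated = activation(0.0, [[0.0, 1.0]], [0.9, 0.1])
```

In a full model, `cue_vecs` and `target_vec` would come from a trained Word2Vec or BERT embedding space, and `base_level` would reflect word frequency; here they are toy two-dimensional vectors chosen only to make the priming contrast visible.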

Keywords: ACT-R, Associative priming, Cognitive Modeling, distributional semantics, Language models, Psycholinguistics

Received: 09 Oct 2025; Accepted: 23 Jan 2026.

Copyright: © 2026 Meghdadi, Duff and Demberg. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Maryam Meghdadi

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.