ORIGINAL RESEARCH article

Front. Comput. Neurosci.

Volume 19 - 2025 | doi: 10.3389/fncom.2025.1474860

This article is part of the Research Topic "The Convergence of AI, LLMs, and Industry 4.0: Enhancing BCI, HMI, and Neuroscience Research".

Analysis of Argument Structure Constructions in a Deep Recurrent Language Model

Provisionally accepted
Ramezani, Schilling and Krauss
  • University of Erlangen Nuremberg, Erlangen, Germany

The final, formatted version of the article will be published soon.

Understanding how language and linguistic constructions are processed in the brain is a fundamental question in cognitive computational neuroscience. This study builds directly on our previous work analyzing Argument Structure Constructions (ASCs) in the BERT language model, extending the investigation to a simpler, brain-constrained architecture: a recurrent neural language model. Specifically, we explore the representation and processing of four ASCs (transitive, ditransitive, caused-motion, and resultative) in a Long Short-Term Memory (LSTM) network. We trained the LSTM on a custom GPT-4-generated dataset of 2,000 syntactically balanced sentences. We then analyzed the internal hidden-layer activations using Multidimensional Scaling (MDS) and t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize sentence representations, and calculated the Generalized Discrimination Value (GDV) to quantify cluster separation. Our results show distinct clusters for the four ASCs across all hidden layers, with the strongest separation observed in the final layer. These findings are consistent with our earlier study based on a large language model and demonstrate that even relatively simple recurrent neural networks (RNNs) can form abstract, construction-level representations. This supports the hypothesis that hierarchical linguistic structure can emerge through prediction-based learning. In future work, we plan to compare these model-derived representations with neuroimaging data from continuous speech perception, further bridging computational and biological perspectives on language processing.
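To make the pipeline sketched in the abstract more concrete, the following minimal Python example (PyTorch and scikit-learn) illustrates one way such an analysis could be set up: an LSTM next-word model, a fixed-size hidden-state representation per sentence, 2-D projections with MDS and t-SNE, and a cluster-separation score in the spirit of the GDV. The architecture, dimensions, toy data, representation choice, and the exact GDV formulation below are illustrative assumptions, not the authors' implementation.

# Minimal sketch: LSTM hidden states as sentence representations,
# projected with MDS / t-SNE and scored with a GDV-style separation measure.
# All names, sizes, and the toy corpus are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.manifold import MDS, TSNE


class LSTMLanguageModel(nn.Module):
    """Next-word prediction LSTM; hidden states are reused as sentence features."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        emb = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        hidden_seq, _ = self.lstm(emb)         # (batch, seq_len, hidden_dim)
        return self.out(hidden_seq), hidden_seq


def sentence_representation(model, token_ids):
    """Hidden state at the final time step as a fixed-size sentence vector
    (mean pooling over time would be an equally plausible choice)."""
    with torch.no_grad():
        _, hidden_seq = model(token_ids.unsqueeze(0))
    return hidden_seq[0, -1].numpy()


def gdv(points, labels):
    """Cluster-separation score in the spirit of the GDV: mean intra-class
    distance minus mean inter-class distance on z-scored data, scaled by
    1/sqrt(D); more negative values indicate better-separated clusters."""
    X = np.asarray(points, dtype=float)
    y = np.asarray(labels)
    X = 0.5 * (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # z-score, scale 0.5
    classes = np.unique(y)

    def mean_pairwise(A, B=None):
        if B is None:                          # intra-class: exclude self-distances
            d = np.linalg.norm(A[:, None] - A[None, :], axis=-1)
            return d.sum() / (len(A) * (len(A) - 1))
        return np.linalg.norm(A[:, None] - B[None, :], axis=-1).mean()

    intra = np.mean([mean_pairwise(X[y == c]) for c in classes])
    inter = np.mean([mean_pairwise(X[y == a], X[y == b])
                     for i, a in enumerate(classes) for b in classes[i + 1:]])
    return (intra - inter) / np.sqrt(X.shape[1])


# Toy stand-in for the sentence dataset: 40 random "sentences" of 10 token ids,
# evenly split over 4 construction labels (transitive, ditransitive, ...).
vocab_size = 5000
rng = np.random.default_rng(0)
sentences = [torch.tensor(rng.integers(0, vocab_size, 10)) for _ in range(40)]
labels = np.repeat(np.arange(4), 10)

model = LSTMLanguageModel(vocab_size)
reps = np.stack([sentence_representation(model, s) for s in sentences])

mds_coords = MDS(n_components=2, random_state=0).fit_transform(reps)
tsne_coords = TSNE(n_components=2, perplexity=10.0, random_state=0).fit_transform(reps)
print("GDV:", gdv(reps, labels), mds_coords.shape, tsne_coords.shape)

In the study itself, the representations would of course come from a model trained on the 2,000 GPT-4-generated sentences and be extracted per hidden layer, rather than from an untrained model on random token sequences as in this toy example.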

Keywords: cognitive computational neuroscience, argument structure constructions, linguistic constructions (CXs), recurrent neural networks (RNNs), LSTMs, sentence representation, computational linguistics, natural language processing (NLP)

Received: 02 Aug 2024; Accepted: 20 May 2025.

Copyright: © 2025 Ramezani, Schilling and Krauss. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Patrick Krauss, University of Erlangen Nuremberg, Erlangen, Germany

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.