AUTHOR=Ramezani, Pegah; Schilling, Achim; Krauss, Patrick
TITLE=Analysis of argument structure constructions in a deep recurrent language model
JOURNAL=Frontiers in Computational Neuroscience
VOLUME=19
YEAR=2025
URL=https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2025.1474860
DOI=10.3389/fncom.2025.1474860
ISSN=1662-5188
ABSTRACT=Understanding how language and linguistic constructions are processed in the brain is a fundamental question in cognitive computational neuroscience. This study builds directly on our previous work analyzing Argument Structure Constructions (ASCs) in the BERT language model, extending the investigation to a simpler, brain-constrained architecture: a recurrent neural language model. Specifically, we explore the representation and processing of four ASCs (transitive, ditransitive, caused-motion, and resultative) in a Long Short-Term Memory (LSTM) network. We trained the LSTM on a custom GPT-4-generated dataset of 2,000 syntactically balanced sentences. We then analyzed the internal hidden-layer activations using Multidimensional Scaling (MDS) and t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize sentence representations. The Generalized Discrimination Value (GDV) was calculated to quantify cluster separation. Our results show distinct clusters for the four ASCs across all hidden layers, with the strongest separation observed in the final layer. These findings are consistent with our earlier study based on a large language model and demonstrate that even relatively simple RNNs can form abstract, construction-level representations. This supports the hypothesis that hierarchical linguistic structure can emerge through prediction-based learning. In future work, we plan to compare these model-derived representations with neuroimaging data from continuous speech perception, further bridging computational and biological perspectives on language processing.
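
As a rough illustration of the model class the abstract describes, below is a minimal PyTorch sketch of a multi-layer LSTM language model trained by next-word prediction, with a per-layer hidden-state readout of the kind one would cluster by construction. All hyperparameters here (vocabulary size, embedding dimension, hidden size, three layers) are placeholder assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    # Next-word-prediction LSTM; all sizes are illustrative placeholders.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices
        hidden_seq, (h_n, _) = self.lstm(self.embed(token_ids))
        # h_n holds the final hidden state of every layer: (num_layers, batch, hidden_dim)
        return self.out(hidden_seq), h_n

model = LSTMLanguageModel(vocab_size=5000)
tokens = torch.randint(0, 5000, (1, 12))  # one toy 12-token "sentence"
with torch.no_grad():
    _, h_n = model(tokens)
final_layer_repr = h_n[-1, 0]  # one fixed-size vector per sentence and layer

Collecting such a vector for every sentence in the dataset, separately for each layer, yields the per-layer representation matrices analyzed in the abstract.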
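The dimensionality-reduction step can be sketched with standard scikit-learn MDS and t-SNE, as below. The arrays reps and labels are random stand-ins for the per-sentence hidden-state matrix and the four ASC class labels; the perplexity and other settings are assumptions, not values reported by the authors.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS, TSNE

rng = np.random.default_rng(0)
reps = rng.normal(size=(200, 256))     # stand-in: one hidden-state vector per sentence
labels = rng.integers(0, 4, size=200)  # stand-in: four ASC classes

# Embed the high-dimensional activations into 2-D with both methods
mds_2d = MDS(n_components=2, random_state=0).fit_transform(reps)
tsne_2d = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(reps)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, emb, title in [(axes[0], mds_2d, "MDS"), (axes[1], tsne_2d, "t-SNE")]:
    ax.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=10)
    ax.set_title(title)
plt.show()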
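The abstract does not restate the GDV formula; the sketch below follows the definition from the authors' earlier work as commonly cited (z-score each dimension, scale by 0.5, then subtract the mean inter-class distance from the mean intra-class distance, so that more negative values indicate stronger cluster separation). The exact normalization should be treated as an assumption and checked against the paper.

import numpy as np
from itertools import combinations

def gdv(points, labels):
    # Generalized Discrimination Value on z-scored, 0.5-scaled data:
    # mean intra-class distance minus mean inter-class distance.
    # More negative = better-separated clusters; 0 = no separation.
    z = 0.5 * (points - points.mean(axis=0)) / (points.std(axis=0) + 1e-12)
    classes = np.unique(labels)

    def mean_dist(a, b=None):
        if b is None:  # mean pairwise distance within one class
            d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
            return d[np.triu_indices(len(a), k=1)].mean()
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()

    intra = np.mean([mean_dist(z[labels == c]) for c in classes])
    inter = np.mean([mean_dist(z[labels == c1], z[labels == c2])
                     for c1, c2 in combinations(classes, 2)])
    return float(intra - inter)

With the random placeholder arrays from the previous sketch, gdv(reps, labels) should hover near zero, since random points carry no class structure; applied per layer to real activations, it quantifies the layer-wise separation trend reported in the abstract.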