
ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Machine Learning and Artificial Intelligence

Large Language Models as Cognitive Shortcuts: A Systems-Theoretic Reframing Beyond Bullshit

Provisionally accepted
Murat Sariyar *
  • Bern University of Applied Sciences, Bern, Switzerland

The final, formatted version of the article will be published soon.

Introduction: Large Language Models (LLMs) are often framed through metaphors such as "bullshit" or "stochastic parrots," which emphasize their lack of grounding, belief, or intention. While rhetorically powerful, these framings obscure how LLMs are actually used for sense-making, ideation, and communication. We reframe LLMs as Operators for General Cognitive Shortcuts (GECOS) within techno-semiotic assemblages.

Methods: We develop a functional model by integrating concepts from Luhmannian systems theory and Deleuzian ontology, drawing to a lesser degree on Husserlian phenomenology. Using conceptual analysis as functional–comparative synthesis, we analyze human–LLM interaction without attributing agency, belief, or understanding to the model.

Results: GECOS explains the usefulness of LLMs as communicative complexity reduction: models generate connectable continuations by approximating second-order expectations ("what is expected to be expected"), enabling interactional continuity without reference to truth or intention. Via Luhmann's contingency formula, LLMs help users navigate uncertainty through procedurally plausible coherence.

Discussion: The framework shifts attention from ontological debates about "understanding" to the operational role of LLMs in distributed sense-making. It also highlights risks: overreliance, emotional projection, and normative flattening when connectability substitutes for justification.

Conclusion: GECOS offers a non-anthropomorphic alternative to deficit metaphors by modeling LLMs as pragmatic operators that sustain communicative momentum and enable workable continuations in complex socio-technical environments.

Keywords: bullshit, ChatGPT, Deleuze, LLMs, Luhmann, simulacra

Received: 07 Aug 2025; Accepted: 22 Jan 2026.

Copyright: © 2026 Sariyar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Murat Sariyar

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.