ORIGINAL RESEARCH article

Front. Psychol.

Sec. Cognitive Science

Divergent Patterns of Probabilistic Reasoning in Humans and GPT-5

  • 1. City St George's, University of London, London, United Kingdom

  • 2. Psychology, Human Sciences, City University of London, London EC1V 0HB, United Kingdom

  • 3. University of Plymouth, Plymouth, United Kingdom

The final, formatted version of the article will be published soon.

Abstract

Large Language Models (LLMs) such as GPT-5 are increasingly consulted for advice on a wide range of issues, yet little is known about the structural profile of their probabilistic judgments compared with that of humans. This study examined GPT-5's adherence to classical probability rules, focusing on conjunction fallacies, disjunction fallacies, and binary complementarity. We employed a large dataset of human probabilistic judgments (Huang et al., 2025), in which participants displayed a range of fallacies, including conjunction and disjunction fallacies (as well as double such fallacies) and violations of binary complementarity. Testing GPT-5 on the same probabilistic task as in Huang et al. (2025), using the same participant profiles, we found that GPT-5 produced only single conjunction or disjunction fallacies and exhibited near-perfect compliance with the binary complementarity constraint. GPT-5's profile aligns closely with the predictions of early quantum-probabilistic models (Busemeyer et al., 2011), rather than with more recent ones incorporating noise (Huang et al., 2025). These findings indicate that GPT-5 implements a more coherent and internally consistent form of probabilistic reasoning than naïve human participants.

Keywords

AI participants (AI subjects), complementarity, conjunction fallacy, disjunction fallacy, GPT-5, human vs. AI cognition, Large Language Models (LLMs), probabilistic reasoning

Received

06 January 2026

Accepted

17 February 2026

Copyright

© 2026 Imannezhad, Pothos and Wills. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Emmanuel Pothos

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, the publisher, the editors, or the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
