
ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. AI for Human Learning and Behavior Change

This article is part of the Research Topic "New Trends in AI-Generated Media and Security".

What you see is not what you get anymore: A mixed-methods approach on human perception of AI-generated images

Provisionally accepted
Malte Högemann1,2*, Jonas Betke1, Oliver Thomas1,2
  • 1 Information Management and Business Informatics, Universität Osnabrück, Osnabrück, Germany
  • 2 Smart Enterprise Engineering, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Standort Niedersachsen, Osnabrück, Germany

The final, formatted version of the article will be published soon.

The rapid development of text-to-image (TTI) models is destabilizing the cultural role of photography as evidence of reality. This study explores how individuals perceive and negotiate authenticity when confronted with AI-generated images beyond portraits, such as landscapes, interiors, and architecture. Using a mixed-methods design that combines classification experiments with qualitative analyses of participants' justifications, we examine not only recognition performance but also the strategies, doubts, and interpretive cues people use when facing synthetic visuals. Our findings reveal the persistence of certain visual artifacts and the growing difficulty of detecting images produced by advanced models such as FLUX.1-dev. More importantly, they reveal patterns of overconfidence, demographic differences, and recurring forms of reasoning. These insights contribute to broader societal debates on disinformation, the epistemic uncertainty of digital culture, and the erosion of shared visual realities. We argue that technical detection systems must be complemented by sociotechnical approaches, such as media literacy, cultural awareness, and regulatory frameworks, to sustain public trust in visual evidence. By connecting empirical data with the cultural politics of authenticity, this paper contributes to interdisciplinary debates on how generative AI reshapes the relationship between images, knowledge, and society.

Keywords: generative AI, disinformation, deepfakes, synthetic images, authenticity, photorealism, AI & society

Received: 17 Sep 2025; Accepted: 04 Nov 2025.

Copyright: © 2025 Högemann, Betke and Thomas. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Malte Högemann, malte.hoegemann@uni-osnabrueck.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.