
EDITORIAL article

Front. Hum. Dyn., 07 October 2025

Sec. Digital Impacts

Volume 7 - 2025 | https://doi.org/10.3389/fhumd.2025.1706740

This article is part of the Research Topic Chatbots as Humanlike Text Generators: Friend or Foe?

Editorial: Chatbots as humanlike text generators: friend or foe?

  • 1Departement Bestuurs- en Organisatiewetenschap, Universiteit Utrecht, Utrecht, Netherlands
  • 2Communication Department, National University of Political Studies and Public Administration, Bucharest, Romania

The field of artificial intelligence, particularly in the realm of natural language processing, has seen significant advancements with the development of chatbots like ChatGPT. These AI-driven text generators have been increasingly utilized across diverse sectors such as education, science, law, and health, offering users a novel way to access information and assistance. Despite their growing popularity, there remains a paucity of empirical research examining the real-world impact of these tools on users. Key questions persist regarding how individuals perceive the utility of chatbots, the nature of their interactions, and whether these digital entities are seen as allies or adversaries. While existing studies have employed SWOT analyses to explore the strengths and weaknesses of chatbots (e.g., Farrokhnia et al., 2024), and others have delved into their conversational dynamics and ethical implications (e.g., Loos et al., 2023), there is a notable gap in understanding the user experience and the broader societal implications of chatbot integration.

The contributions in this Research Topic explore the nuanced relationship between users and chatbots, uncover the factors that influence user perceptions and behaviors, and consider whether chatbots are viewed as beneficial companions or potential threats. This Research Topic includes the following five contributions, covering diverse users (patients, caregivers, clinicians, physicians, nurses, regulated mental health professionals, educators, and students), various chatbot application domains (health, education, and law), and countries (Canada, Germany, the Netherlands, Saudi Arabia, and the USA):

1. “Critical conversations: a user-centric approach to chatbots for history taking in the pediatric intensive care unit,” by Collins et al. explores the use of chatbots in pediatric medical settings, focusing on the need for emergent interventions in intensive care units. This work takes a critical view of how to include users, such as patients, caregivers, and clinicians, in generating diagnostic reasoning and mitigating false information. Challenges of using chatbots in intensive medical care facilities are discussed, and solutions for improvement are proposed.

2. “Experiment with ChatGPT: methodology of first simulation,” by Shvets et al. presents an experimental approach to evaluating ChatGPT outputs on different assignment topics, compared with supervisor-provided feedback, in terms of the answers' clarity, depth, and relevance. With students as participants, this work shows that both types of feedback were positively perceived, with a slight preference for the ChatGPT answers regarding their perceived ability to clarify the topic of the given assignments. This contribution sheds light on the potential advantages of AI-generated feedback in educational settings, especially for topic clarification and when personalized feedback from supervisors is unavailable or limited.

3. “Exploring health professionals' views on the depiction of conversational agents as health professionals: a qualitative descriptive study,” by MacNeill et al. focuses on the role of conversational AI in health settings, showing the advantages and challenges of using conversational agents in the health care system in general. The results offer guidance and recommendations valuable to both users and developers.

4. “Free word association analysis of students' perception of artificial intelligence,” by Henrich et al. examines students' perceptions of AI using semantic concept association. The authors explore how AI concepts are clustered and apparent in students' representations and what immediate applications students envisage for different assistance systems. The article critically examines the role of AI in the educational environment and the practical understanding of some very abstract terms at the student level, and offers reflections on how to facilitate AI literacy in schools and universities.

5. “Intention to Use ChatGPT among Law Educators in Saudi Arabia,” by Sarabdeen draws attention to another challenging topic: the use of common AI systems, such as ChatGPT, in professional domains, particularly ones with political implications, such as law education. The author explores the performance and effort expectations, as well as the facilitating conditions and behavioral intentions, associated with adoption and use. The work underscores the importance of creating strong policies to regulate the proper acceptance and use of ChatGPT and similar AI applications by law educators and in other key domains. The research also discusses the need for proper training for law educators, particularly in countries with a relatively recent history of technology use.

This Research Topic offers key insights into the realm of user-chatbot interactions and their potential benefits for users, developers, and policy makers. These insights relate to both opportunities and drawbacks. Based on the findings across the five studies presented here, we conclude that more longitudinal empirical studies in a variety of contexts and countries are needed. Chatbots, as humanlike text generators, might be considered friends, foes, or both, and their pervasiveness in all spheres of social and professional life is made apparent in the works of the researchers included in this Research Topic.

Author contributions

EL: Writing – original draft. LI: Writing – review & editing.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Farrokhnia, M., Banihashem, S. K., Noroozi, O., and Wals, A. (2024). A SWOT analysis of ChatGPT: implications for educational practice and research. Innov. Educ. Teach. Int. 61, 460–474. doi: 10.1080/14703297.2023.2195846


Loos, E., Gröpler, J., and Goudeau, M. L. S. (2023). Using ChatGPT in education: human reflection on ChatGPT's self-reflection. Societies 13:196. doi: 10.3390/soc13080196


Keywords: chatbots, GenAI, user perspective, education, health, opportunities, threats

Citation: Loos E and Ivan L (2025) Editorial: Chatbots as humanlike text generators: friend or foe?. Front. Hum. Dyn. 7:1706740. doi: 10.3389/fhumd.2025.1706740

Received: 16 September 2025; Accepted: 23 September 2025;
Published: 07 October 2025.

Edited and reviewed by: Peter David Tolmie, University of Siegen, Germany

Copyright © 2025 Loos and Ivan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Eugène Loos, e.f.loos@uu.nl
