
EDITORIAL article

Front. Hum. Dyn., 05 December 2025

Sec. Digital Impacts

Volume 7 - 2025 | https://doi.org/10.3389/fhumd.2025.1746496

This article is part of the Research Topic "The Role of Artificial Intelligence in Everyday Well-Being" (9 articles).

Editorial: The role of artificial intelligence in everyday well-being

  • 1Centre for Research and Innovation, Oulu University of Applied Sciences, Oulu, Finland
  • 2Department of Informatics, University of Economics in Katowice, Katowice, Poland

Artificial intelligence (AI) is increasingly influencing healthcare, mental health, and education, creating new opportunities for diagnosis, treatment, and personalized support. Alongside these benefits, this Research Topic considers the complex effects of AI on individual wellbeing across various contexts and age groups. The contributions address vital issues such as privacy, trust, algorithmic bias, and widening digital and socioeconomic divides, while also investigating opportunities for the responsible integration of AI into everyday life.

The Research Topic emphasizes how AI impacts mental health interventions, patient trust, and engagement in educational and family contexts. It highlights psychological factors such as attitudes toward AI, anxiety, and behavioral intentions, as well as systemic issues like transparency and governance. By presenting strategies for harm reduction, trust-building, and ethical oversight, these studies offer valuable insights for shaping policy, guiding practice, and ensuring that AI functions as a tool for enhancing, rather than compromising, human wellbeing.

The studies in this Research Topic employ various methodologies, including qualitative interviews, feasibility studies, and advanced statistical modeling (structural equation modeling [SEM] and artificial neural networks [ANN]), to investigate user attitudes, engagement, and behavioral factors related to AI. Conceptual contributions, such as opinion and perspective articles, address ethical, trust, and governance issues, while empirical research explores psychological determinants, gender differences, and educational impacts through cross-sectional and mediation analyses. Additionally, a bibliometric analysis tracks long-term trends in AI and mental health research, complementing intervention-focused studies that assess practical applications, such as voice assistants and generative AI tools, in real-world settings.

The articles in this Research Topic collectively examine how AI impacts health, education, and wellbeing, highlighting both opportunities and risks. In healthcare settings, a qualitative study finds that patients consider ChatGPT a helpful tool for accessing oncological information. However, concerns about privacy and the irreplaceable role of human doctors emphasize the need for clear guidelines (Durosini et al.). Similarly, an opinion article highlights that AI's success in mental healthcare relies on fostering justified trust through transparency, patient education, and human oversight (Bach and Männikkö). At the same time, a perspective article emphasizes the importance of governance frameworks and accountability in mitigating the ethical risks associated with algorithmic decision-making (Cheong).

Several studies examine the role of AI in mental health and education. Research on generative AI art therapy finds that adolescents' engagement is motivated by perceived trust, enjoyment, and intuitive design, suggesting strategies such as gamification to boost participation (Peng et al.). Another study indicates that academic engagement enhances mental health among Chinese college students, with confidence and behavioral intention toward AI use serving as important mediators (Wang and Wang). A feasibility study on a voice assistant for parents shows potential for supporting children's mental health, although technical complexity and privacy concerns remain obstacles to adoption (Richmond et al.).

Finally, two articles examine broader social and systemic factors. One study finds that women report higher AI anxiety and lower positive attitudes compared to men, revealing persistent gender gaps in AI adoption (Russo et al.). A bibliometric analysis documents a sharp increase in AI-related mental health research over two decades, with machine learning dominating applications for risk prediction and personalized interventions, while highlighting ongoing challenges around data quality, privacy, and bias (Chen et al.).

The contributions in this Research Topic collectively emphasize the dual nature of AI in shaping everyday wellbeing: AI offers transformative opportunities in healthcare, education, and mental health support. At the same time, it presents complex ethical, psychological, and social challenges. Across studies, themes of trust, transparency, and user attitudes emerge as key factors for successful integration, alongside concerns about privacy, misuse, and equity. Original research highlights practical applications, from ChatGPT in oncology to generative AI in therapy and voice assistants for parenting, while also underscoring ongoing gaps such as gender disparities and governance needs. Notably, the perspective on transparency and accountability has garnered substantial attention, reflecting the growing recognition that robust governance and clear communication are crucial for safeguarding wellbeing in the era of AI (Cheong). Together, these insights call for interdisciplinary collaboration, robust regulatory frameworks, and user-centered design strategies to ensure that AI adoption promotes safety, inclusivity, and sustainable wellbeing.

Author contributions

NM: Writing – original draft, Writing – review & editing. AS: Writing – review & editing, Writing – original draft.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: artificial intelligence, generative AI, algorithmic bias, wellbeing, transparency

Citation: Männikkö N and Strzelecki A (2025) Editorial: The role of artificial intelligence in everyday well-being. Front. Hum. Dyn. 7:1746496. doi: 10.3389/fhumd.2025.1746496

Received: 14 November 2025; Revised: 25 November 2025; Accepted: 26 November 2025;
Published: 05 December 2025.

Edited and reviewed by: Peter David Tolmie, University of Siegen, Germany

Copyright © 2025 Männikkö and Strzelecki. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Niko Männikkö, mannikkon@gmail.com
