AUTHOR=Chu Bianca, Modi Natansh D., Menz Bradley D., Bacchi Stephen, Kichenadasse Ganessan, Paterson Catherine, Kovoor Joshua G., Ramsey Imogen, Logan Jessica M., Wiese Michael D., McKinnon Ross A., Rowland Andrew, Sorich Michael J., Hopkins Ashley M.
TITLE=Generative AI’s healthcare professional role creep: a cross-sectional evaluation of publicly accessible, customised health-related GPTs
JOURNAL=Frontiers in Public Health
VOLUME=13
YEAR=2025
URL=https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2025.1584348
DOI=10.3389/fpubh.2025.1584348
ISSN=2296-2565
ABSTRACT=Introduction: Generative artificial intelligence (AI) is advancing rapidly; an important consideration is the public’s increasing ability to customise foundational AI models to create publicly accessible applications tailored for specific tasks. This study aims to evaluate the accessibility and functionality descriptions of customised GPTs on the OpenAI GPT store that provide health-related information or assistance to patients and healthcare professionals.
Methods: We conducted a cross-sectional observational study of the OpenAI GPT store from September 2 to 6, 2024, to identify publicly accessible customised GPTs with health-related functions. We searched across general medicine, psychology, oncology, cardiology, and immunology applications. Identified GPTs were assessed for their name, description, intended audience, and usage. Regulatory status was checked across the U.S. Food and Drug Administration (FDA), European Union Medical Device Regulation (EU MDR), and Australian Therapeutic Goods Administration (TGA) databases.
Results: A total of 1,055 customised, health-related GPTs targeting patients and healthcare professionals were identified, which had collectively been used in over 360,000 conversations. Of these, 587 were psychology-related, 247 were in general medicine, 105 in oncology, 52 in cardiology, 30 in immunology, and 34 in other health specialties. Notably, 624 of the identified GPTs included healthcare professional titles (e.g., doctor, nurse, psychiatrist, oncologist) in their names and/or descriptions, suggesting they were taking on such roles. None of the customised GPTs identified were FDA, EU MDR, or TGA-approved.
Discussion: This study highlights the rapid emergence of publicly accessible, customised, health-related GPTs. The findings raise important questions about whether current AI medical device regulations are keeping pace with rapid technological advancements. The results also highlight the potential “role creep” in AI chatbots, where publicly accessible applications begin to perform, or claim to perform, functions traditionally reserved for licensed professionals, underscoring potential safety concerns.