- 1Universidad de Guadalajara—Centro Universitario Los Altos, Tepatitlán de Morelos, Mexico
- 2Universidad Técnica de Machala, Machala, Ecuador
- 3Universidad Técnica de Milagro, Milagro, Ecuador
- 4University of Sharjah, College of Communication, Sharjah, United Arab Emirates
- 5ESAI Business School, Universidad de Especialidades Espíritu Santo, Guayaquil, Ecuador
The increasing integration of generative artificial intelligence (AI) into digital health is transforming apomediation into AIMediation, reconfiguring patient autonomy and raising ethical concerns that must be addressed. This study examines how algorithmic curation, personalized interfaces, and conversational agents redefine what information becomes visible and trustworthy, generating an illusion of autonomy that can mask the erosion of real decision-making capacity. Based on an exploratory synthesis of the recent literature (n = 38), three dimensions are analyzed: algorithmic intermediation, perceived autonomy, and informational vulnerability, with attention to cognitive overload and the amplification of biases in health information seeking. The evidence indicates that AIMediation can improve access to and understanding of health information but can also intensify risks such as misinformation and reliance on opaque outputs, posing challenges for safeguarding transparency, patient agency, and equitable access to reliable information.
1 Introduction
Digital healthcare has decentralized the information ecosystem, transforming patients into prosumers capable of interacting with networks, algorithms, interfaces, metrics, and medical content (Eysenbach, 2008; Kleinman and Barad, 2012). This relational scenario grants users an active role in building the reliability, expertise, and value of recommendations not only through the content itself but also through their interaction with the platforms. The novel and risky aspect of this phenomenon is that the quality of information no longer resides solely in its content but also in the algorithms that organize, label, and trace sources, prioritizing their reputation, explainability, and visibility under an automated logic.
Eysenbach (2008) termed this transformation "apomediation," a structural feature of what he called Medicine 2.0. This process allows access to informational guidance without professional gatekeeping and enables digital experiences comparable to those in the physical world. Today, it is common for patients to seek medical information online before attending a consultation (van der Westhuizen et al., 2025), exercising apparent autonomy supported by verifiable metadata and the algorithmic intermediation of platforms (Eysenbach, 2009, 2023; Lederman and Gray, 2025).
In the last decade, artificial intelligence systems have intensified these processes. Medical chatbots, recommendation algorithms, and predictive systems not only mediate access to information but also personalize it, generating a tailored information diet that relegates patients to a passive position in their decision-making processes (Mennella et al., 2024). In this context, Romero-Rodríguez and Castillo-Abdul (2025) proposed the term AIMediation to describe this new phase of automated filtering that generates an “illusion of autonomy” (Grote and Berens, 2020), in which decisions are perceived as independent but conditioned by opaque algorithmic structures, training biases, and commercial logic.
This paradox is exacerbated by information overload, decision fatigue, and reinforcement of cognitive biases, especially when patients resort to mental shortcuts such as rankings, brands, or reviews (Zhong et al., 2024). While the dominant discourse in digital health promotes control and individual choice, decisions are made within information architectures managed by algorithms, which determine which narratives gain visibility and which options are silenced, resulting in an asymmetric decision space that is perceived as neutral (Refolo et al., 2025).
This gap between perceived and effective autonomy, mediated by invisible curation processes, poses significant challenges for bioethics, public health, and health communication (Almela-Baeza et al., 2025; Rubinelli, 2025). This study examines how algorithmic curation, personalized interfaces, and conversational agents redefine what information becomes visible and trustworthy, generating an illusion of autonomy that can mask the erosion of real decision-making capacity. Analyzing this transformation is key to understanding how AIMediation reconfigures patient decision-making, amplifies the risk of misinformation, and redefines trust frameworks in the age of artificial intelligence.
For this mini-review, an exploratory search was conducted in the Scopus, Web of Science, PubMed, and Google Scholar databases. Combinations of terms such as "apomediation," "generative artificial intelligence," "patient autonomy," "health misinformation," and "decision-making" were used. Peer-reviewed articles in English or Spanish were considered if they explicitly addressed at least two of the following themes: information mediation in health, use of artificial intelligence, patient autonomy, and associated cognitive and ethical risks.
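As an illustration only (a reconstruction based on the terms listed above, not the exact string registered in a search protocol), the combinations followed the general form: ("apomediation" OR "AIMediation") AND ("generative artificial intelligence" OR "patient autonomy" OR "health misinformation" OR "decision-making").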
The titles and abstracts of approximately 120 publications were screened. Of these, 38 were selected for full-text analysis owing to their theoretical rigor and their direct contribution to the proposed framework. The most frequent reasons for exclusion were an exclusively technical focus without implications for health decision-making or communication, clinical approaches lacking reflection on the role of artificial intelligence in patient autonomy, and insufficient theoretical depth in addressing AI in healthcare. Empirical studies, reviews, conceptual essays, and academic editorials were integrated to construct a critical and representative synthesis of AI-mediated phenomena in the context of digital health.
2 Artificial intelligence and the reconfiguration of patient autonomy
Evidence suggests that the transition from traditional professional mediation to apomediation environments places patients in a more active role in searching for and selecting information, but within platforms that filter, order, and prioritize what is visible and credible (Eysenbach, 2008, 2009, 2023; van der Westhuizen et al., 2025; Lederman and Gray, 2025). The autonomy exercised in these contexts is, from the outset, conditioned by algorithmic architectures that define the range of information available.
Within this framework, the emergence of artificial intelligence reconfigures the landscape by shifting the center of gravity toward automation. Virtual assistants, chatbots, and conversational agents have emerged as the preferred channels for accessing clinical reasoning, personalized responses, and 24/7 guidance (Capasso and Umbrello, 2022; Sezgin et al., 2020; Barreda et al., 2025). Far from simply complementing existing information, these tools structure the patient’s information journey: AI discovers, prioritizes, and organizes content, while the user receives recommendations whose traceability is difficult to verify (Mennella et al., 2024; Reuben et al., 2024; Romero-Rodríguez and Castillo-Abdul, 2025). In this process, autonomy shifts from the ideal of informed deliberation to the pragmatic acceptance of algorithmically generated responses.
Specific effects on the doctor-patient relationship have also been reported. Some studies indicate that AI-generated responses can be perceived as more empathetic or of higher quality than those from certified professionals (Berg et al., 2024), introducing a symbolic competition for patient trust. Simultaneously, many physicians are integrating AI as an ally to manage high volumes of consultations without relinquishing their ability to verify and contextualize recommendations (Branda et al., 2025). Patients, by contrast, appear more exposed to the "illusion of autonomy": they feel they are making their own decisions, but they do so within informational frameworks tightly configured by algorithms that redefine what information is visible, plausible, and actionable. Overall, AI mediation modifies information flows and reconfigures the operational meaning of "autonomy" in healthcare decision-making.
3 Informational risks and cognitive vulnerabilities
The available evidence converges on the idea that AI-mediated information overlaps with and amplifies preexisting cognitive vulnerabilities. Several studies have shown that the abundance and fragmentation of information options are associated with cognitive overload and a greater reliance on mental shortcuts (position of results, platform brand, appearance of professionalism) as criteria for making quick decisions (Zhong et al., 2024). In the realm of mobile health applications, information overload is linked to excessive use of healthcare services and to changes in key perceptions of the Health Belief Model (severity, susceptibility, barriers, and self-efficacy), introducing systematic biases into the assessment of risks and benefits (Zhong et al., 2024). In clinical settings, the volume and complexity of electronic health records are linked to increased cognitive load and to errors or adverse effects resulting from their use (Asgari et al., 2024). The literature on decision fatigue also describes a gradual decline in decision quality in high-friction contexts, with increased reliance on shortcuts and decreased diagnostic accuracy as shifts progress (Perry et al., 2025; Grignoli et al., 2025; Maier et al., 2025). Together, these factors encourage the uncritical acceptance of AI-generated recommendations.
The analysis of algorithmic personalization adds another layer of risk. Several studies have indicated that systems tend to reinforce prior preferences and search habits, creating echo chambers and confirmation dynamics that reduce exposure to conflicting or corrective information (Bykov and Medvedeva, 2024; Refolo et al., 2025). In the healthcare field, this logic increases the likelihood of encountering misinformation and hinders its subsequent correction, as each new interaction reinforces the same interpretive framework. From our perspective, the convergence of information overload, decision fatigue, confirmation bias, and opaque curation functions as a structural mechanism that erodes the patient's real autonomy, even when the patient subjectively experiences their behavior as a free exercise of choice.
4 Emerging benefits and empowerment scenarios
Although much of the literature emphasizes the risks associated with AI-mediated communication, the reviewed studies agree that it would be reductionist to characterize it solely as a threat to human autonomy. Several studies have shown that AI systems can significantly improve the temporal and geographical accessibility of health information. Chatbots, virtual assistants, and consultation platforms allow users to ask questions, receive guidance, and access educational resources at any time without depending on the immediate availability of a professional (Sezgin et al., 2020; Barreda et al., 2025; Costa and Serra, 2025). In the context of health crises or high pressure on healthcare systems, this continuous responsiveness emerges as a relevant component of the information ecosystem, inheriting the logic of apomediation but enhanced by the automation inherent in AI-mediated communication.
The literature also highlights the potential of generative models to simplify specialized terminology and adapt explanations to a user’s level of understanding. Studies focusing on ChatGPT and similar tools indicate that the ability to ask follow-up questions, request clarification, and reconstruct examples facilitates the understanding of diagnoses, treatments, and care options, contributing to more interactive health literacy processes (Kacer, 2025; Riedel et al., 2023; Townsend et al., 2023). Although comparative analyses indicate that traditional search engines still outperform chatbots in the reliability of public information under certain conditions (Nelson et al., 2025), the dialogic dimension of AI introduces forms of cognitive support that were not present in apomediation based exclusively on social and reputational filters.
A third group of contributions comes from applications specifically designed for preliminary diagnostic guidance, symptom checking, and access to educational materials, such as Ada Health Companion and MayaMD, which attempt to integrate clinical criteria and human oversight mechanisms into the interpretation of information. In parallel, in the professional sphere, AI is being incorporated to support the management of large volumes of inquiries and to feed predictive models that guide prevention policies and crisis preparedness strategies, for example, in the field of vaccines (Branda et al., 2025; El Arab et al., 2025). Taken together, these scenarios paint a picture of AIMediation that not only erodes but could also expand certain dimensions of patient agency, provided that design and governance frameworks exist to contain its most problematic effects. In the transition model from apomediation to AIMediation, these emerging benefits coexist with the erosion of autonomy at the heart of the figure, demonstrating that the same technical-informational framework can either enable or restrict decision-making capacity, depending on its configuration and regulation.
5 Erosion of autonomy: practical implications
The practical implications of this mini-review are articulated around the conceptual model of the transition from apomediation to AIMediation (see Figure 1), which places the "current erosion of autonomy" at the intersection of three axes: algorithmic intermediation, perceived autonomy, and informational vulnerability. The aim is to synthesize the idea that what is at stake is not only the intrinsic quality of content but also how digital systems decide what information becomes visible, reliable, and actionable. From this perspective, autonomy can no longer be understood as a purely individual attribute but as an emergent result of the interaction between technical infrastructures, cognitive frameworks, and social contexts.
Regarding algorithmic intermediation, the implications point to the need for regulatory frameworks that define prioritization criteria, make recommendation logics explicit, and guarantee for users the traceability of the sources that feed AI responses. The reviewed literature suggests that the opacity of these processes contributes both to misinformation and to an illusory sense of control over one's own decisions (Bykov and Medvedeva, 2024; Refolo et al., 2025; Wang et al., 2025). Regarding perceived autonomy, the challenge shifts to designing interfaces and reputation systems that not only build trust but also encourage critical reflection and comparison of options, preventing authority indicators from replacing informed evaluation. Finally, the informational vulnerability axis highlights the urgent need to strengthen media and health literacy and to identify and mitigate situations of information overload, decision fatigue, and sustained exposure to misinformation (Almela-Baeza et al., 2025; Zhong et al., 2024; Asgari et al., 2024).
From these three dimensions, a practical agenda emerges that combines technical, educational, and regulatory interventions. Among the lines of action highlighted in the literature are the clinical validation of AI tools used in healthcare, the systematic incorporation of human oversight in high-risk scenarios, the requirement of transparency and accountability from developers and platforms, and the development of training programs focused on the critical use of AI technologies in healthcare settings (Bykov and Medvedeva, 2024; Almela-Baeza et al., 2025; El Arab et al., 2025). Within the framework of the presented model, these interventions can be understood as attempts to shift the balance from the erosion of autonomy toward a more favorable equilibrium between protection and empowerment, taking advantage of the benefits of AIMediation without relinquishing the responsibility to safeguard the real decision-making capacity of patients and professionals.
6 Discussion and conclusions
The reviewed evidence shows that AIMediation is not limited to introducing a new technology into the digital health ecosystem but rather reconfigures the notion of patient autonomy and the architecture of apomediation (Eysenbach, 2008, 2009, 2023; van der Westhuizen et al., 2025). Recommendation systems, chatbots, and generative models shift the focus from apomediation based on social filters and reputational signals toward a regime in which artificial intelligence discovers, selects, prioritizes, and, in many cases, generates the responses that guide clinical decisions (Mennella et al., 2024; Fontaines-Ruiz et al., 2025; Romero-Rodríguez and Castillo-Abdul, 2025). In this transition, autonomy ceases to be understood as a deliberative exercise based on the comparison of sources; decision-making instead relies on conversational interfaces that offer immediate, personalized, and plausible responses. From this emerges the "illusion of autonomy": patients perceive that they are deciding for themselves, but they do so within algorithmically preconfigured frameworks of visibility and meaning, where the opacity of training and optimization criteria limits the capacity for critical scrutiny of the available information (Grote and Berens, 2020; Born et al., 2024; Canady and Larzo, 2023; Rubinelli, 2025).
Simultaneously, this shift places AIMediation in a structural tension between vulnerability and empowerment. On the one hand, it exacerbates already documented risks: information overload in applications and clinical records, decision fatigue, and greater reliance on cognitive shortcuts in high-friction decision contexts, with direct effects on the quality of clinical judgment and patient autonomy (Zhong et al., 2024; Asgari et al., 2024; Perry et al., 2025; Grignoli et al., 2025; Maier et al., 2025). These dynamics combine with information architectures that can reinforce confirmation bias, echo chambers, and asymmetries in information access (Bykov and Medvedeva, 2024; Refolo et al., 2025; Surrenti and Di Felice, 2025). However, these systems also demonstrate the capacity to expand certain forms of agency by improving accessibility, simplifying specialized language, and supporting health comprehension and learning processes, with potential positive effects on information literacy and equity (Sezgin et al., 2020; Barreda et al., 2025; Kacer, 2025; Riedel et al., 2023; Nelson et al., 2025). The preceding sections have built on this tension: they traced the reconfiguration of patient autonomy under regimes of apomediation and AIMediation, examined the risks of misinformation and the cognitive vulnerabilities that underpin them, reviewed the main empowerment scenarios identified in the literature, and explored the normative and public policy implications that this new regime of information mediation poses for health systems (Almela-Baeza et al., 2025; El Arab et al., 2025; Wang et al., 2025).
This study examined how the transition from apomediation to AIMediation reconfigures healthcare decision-making, showing that patient autonomy can no longer be understood as an individual attribute but as the result of the interaction between algorithms, interfaces, and information-saturated cognitive frameworks. The evidence positions AIMediation as a double-edged sword: it can lead to information overload, decision fatigue, biases, and misinformation, but it also opens opportunities for accessibility, understanding, and support in healthcare. The proposed model, which locates the erosion of autonomy at the intersection of algorithmic intermediation, perceived autonomy, and information vulnerability, offers a synthetic framework for understanding these tensions and guiding interventions beyond simply improving content quality.
Based on these findings, an agenda emerges that combines regulation, responsible design, and critical literacy. AI governance in healthcare should advance in terms of transparency, prioritization criteria, clinical validation, and human oversight in high-risk scenarios, while patients and professionals need greater skills to interpret and contextualize algorithmic recommendations. Key future research lines include empirically operationalizing the model’s core principles in different contexts, evaluating the design and training interventions that mitigate risks without sacrificing benefits, and analyzing how AI mediation interacts with inequalities and the social determinants of health. Taken together, these approaches aim to shift the focus from the erosion of autonomy to configurations in which artificial intelligence effectively contributes to freer, more informed, and fairer decisions.
Author contributions
AP-R: Conceptualization, Visualization, Writing – original draft. TF-R: Conceptualization, Investigation, Writing – original draft. LR-R: Conceptualization, Methodology, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. Generative AI tools were used exclusively to assist in refining the English translation of the manuscript from its original Spanish version, without altering the scientific content. No generative AI was used for analysis, interpretation, writing, or generation of original research material.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Almela-Baeza, J., Ferrigno, C., and Febrero, B. (2025). Use of social media by health science degree students in the field of organ donation and transplantation. J. Media 6:113. doi: 10.3390/journalmedia6030113
Asgari, E., Kaur, J., Nuredini, G., Balloch, J., Taylor, A., Sebire, N., et al. (2024). Impact of electronic health record use on cognitive load and burnout among clinicians: narrative review. JMIR Med. Inform. 12:e55499. doi: 10.2196/55499
Barreda, M., Cantarero-Prieto, D., Coca, D., Delgado, A., Lanza-León, P., Lera, J., et al. (2025). Transforming healthcare with chatbots: uses and applications—a scoping review. Digital Health 11:20552076251319174. doi: 10.1177/20552076251319174
Berg, H. T., van Bakel, B., van de Wouw, L., Jie, K. E., Schipper, A., Jansen, H., et al. (2024). ChatGPT and generating a differential diagnosis early in an emergency department presentation. Ann. Emerg. Med. 83, 83–86. doi: 10.1016/j.annemergmed.2023.08.003
Born, C., Schwarz, R., Böttcher, T. P., Hein, A., and Krcmar, H. (2024). The role of information systems in emergency department decision-making: a literature review. J. Am. Med. Inform. Assoc. 31, 1608–1621. doi: 10.1093/jamia/ocae096
Branda, F., Stella, M., Ceccarelli, C., Cabitza, F., Ceccarelli, G., Maruotti, A., et al. (2025). The role of AI-based chatbots in public health emergencies: a narrative review. Fut. Int. 17:145. doi: 10.3390/fi17040145
Bykov, I., and Medvedeva, M. V. (2024). "Media literacy and AI-technologies in digital communication: opportunities and risks" in 2024 Communication Strategies in Digital Society Seminar (ComSDS) (Saint Petersburg, Russia: IEEE), 21–24.
Canady, B. E., and Larzo, M. (2023). Overconfidence in managing health concerns: the Dunning-Kruger effect and health literacy. J. Clin. Psychol. Med. Settings 30, 460–468. doi: 10.1007/s10880-022-09895-4
Capasso, M., and Umbrello, S. (2022). Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants. Med. Health Care Philos. 25, 11–22. doi: 10.1007/s11019-021-10062-z
Costa, D., and Serra, R. (2025). The role of communication in managing chronic lower limb wounds. J. Multidiscip. Healthc. 18, 3685–3708. doi: 10.2147/JMDH.S533416
El Arab, R. A., Alkhunaizi, M., Alhashem, Y. N., Al Khatib, A., Bubsheet, M., and Hassanein, S. (2025). Artificial intelligence in vaccine research and development: an umbrella review. Front. Immunol. 16:1567116. doi: 10.3389/fimmu.2025.1567116
Eysenbach, G. (2008). Medicine 2.0: social networking, collaboration, participation, apomediation, and openness. J. Med. Internet Res. 10:e22. doi: 10.2196/jmir.1030
Eysenbach, G. (2009). Infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the internet. J. Med. Internet Res. 11:e11. doi: 10.2196/jmir.1157
Eysenbach, G. (2023). The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med. Educ. 9:e46885. doi: 10.2196/46885
Fontaines-Ruiz, T., Romero-Rodríguez, L. M., Ponce Rojo, A., and Herrera, D. P. R. (2025). De la información a la apomediación: interacciones, temáticas y sentimientos sobre dolor lumbar en YouTube [From information to apomediation: interactions, topics, and sentiments about low back pain on YouTube]. El Prof. Inf. 34:e34107. doi: 10.3145/epi.2025.ene.34107
Grignoli, N., Manoni, G., Gianini, J., Schulz, P., Gabutti, L., and Petrocchi, S. (2025). Clinical decision fatigue: a systematic and scoping review with meta-synthesis. Family Med. Commun. Health 13:e003033. doi: 10.1136/fmch-2024-003033
Grote, T., and Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. J. Med. Ethics 46, 205–211. doi: 10.1136/medethics-2019-105586
Kacer, E. O. (2025). Evaluating AI-based breastfeeding chatbots: quality, readability, and reliability analysis. PLoS One 20:e0319782. doi: 10.1371/journal.pone.0319782
Lederman, R., and Gray, K. (2025). “Introduction to health information systems research” in Research handbook on health information systems (Cheltenham, UK: Edward Elgar Publishing), 1–12.
Maier, M., Powell, D., Murchie, P., and Allan, J. L. (2025). Systematic review of the effects of decision fatigue in healthcare professionals on medical decision-making. Health Psychol. Rev. 19, 717–762. doi: 10.1080/17437199.2025.2513916
Mennella, C., Maniscalco, U., De Pietro, G., and Esposito, M. (2024). Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon 10:e26297. doi: 10.1016/j.heliyon.2024.e26297
Nelson, H. C., Beauchamp, M. T., and Pace, A. A. (2025). The reliability gap: how traditional search engines outperform artificial intelligence (AI) chatbots in rosacea public health information quality. Cureus 17:e86543. doi: 10.7759/cureus.86543
Perry, K., Jones, S., Stumpff, J. C., Kruer, R., Czosnowski, L., Kashiwagi, D., et al. (2025). Decision fatigue in hospital settings: a scoping review. J. Hosp. Med. 20, 385–395. doi: 10.1002/jhm.13550
Refolo, P., Sacchini, D., Raimondi, C., Masilla, S. S., Corsano, B., Mercuri, G., et al. (2025). Should artificial intelligence-based patient preference predictors be used for incapacitated patients? A scoping review of reasons to facilitate medico-legal considerations. Healthcare 13:590. doi: 10.3390/healthcare13060590
Reuben, J. S., Meiri, H., and Arien-Zakay, H. (2024). AI's pivotal impact on redefining stakeholder roles and their interactions in medical education and health care. Front. Digit. Health 6:1458811. doi: 10.3389/fdgth.2024.1458811
Riedel, M., Kaefinger, K., Stuehrenberg, A., Ritter, V., Amann, N., Graf, A., et al. (2023). ChatGPT's performance in German OB/GYN exams—paving the way for AI-enhanced medical education and clinical practice. Front. Med. 10:1296615. doi: 10.3389/fmed.2023.1296615
Romero-Rodríguez, L. M., and Castillo-Abdul, B. (2025). From apomediation to AImediation: generative AI and the reconfiguration of informational authority in health communication. J. Prim. Care Community Health 16 (in press). doi: 10.1177/21501319251381878
Rubinelli, S. (2025). The paradox of autonomy when disinformation masquerades as health information. Patient Educ. Couns. 139:109232. doi: 10.1016/j.pec.2025.109232
Sezgin, E., Huang, Y., Ramtekkar, U., and Lin, S. (2020). Readiness for voice assistants to support healthcare delivery during a health crisis and pandemic. NPJ Digit. Med. 3:122. doi: 10.1038/s41746-020-00332-0
Surrenti, S., and Di Felice, M. (2025). Rethinking social action through the info-ecological dimensions of two collaborative public health platforms: the people's health movement and the citizen sense project platforms as examples of health-net-activism. Front. Sociol. 10:1602858. doi: 10.3389/fsoc.2025.1602858
Townsend, B. A., Plant, K. L., Hodge, V. J., Ashaolu, O., and Calinescu, R. (2023). Medical practitioner perspectives on AI in emergency triage. Front. Digit. Health 5:1297073. doi: 10.3389/fdgth.2023.1297073
van der Westhuizen, E., Pottas, D., and Petratos, S. (2025). “Unveiling the power of apomediation: perspectives from individuals living with autoimmune disease” in Communications in Computer and Information Science (Cham, Switzerland: Springer Nature Switzerland), 348–365.
Wang, Y., Gao, C., Li, S., and Deng, Q. (2025). Credibility and adoption of online health information shared by parents: a study of young adults in mainland China. J. Med. Human. Media 3, 16–45. doi: 10.62787/mhm.v3i3.209
Keywords: AIMediation, apomediation, cognitive overload, decision-making, digital health, health communication, health information quality, patient autonomy
Citation: Ponce-Rojo A, Fontaines-Ruiz T and Romero-Rodríguez LM (2026) Apomediation, AI, and the illusion of autonomy: risks of misinformation in patient decision-making. Front. Commun. 10:1684370. doi: 10.3389/fcomm.2025.1684370
Edited by: Styliani A. Geronikolou, National and Kapodistrian University of Athens, Greece
Reviewed by: George Drosatos, Athena Research Center, Greece
Copyright © 2026 Ponce-Rojo, Fontaines-Ruiz and Romero-Rodríguez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Luis M. Romero-Rodríguez, lromero@sharjah.ac.ae