Abstract
Artificial intelligence (AI) is reshaping medical education, particularly in the teaching of physical examination and the development of clinical judgement in digitally mediated contexts. This study presents a critical narrative review examining the ethical, pedagogical, and humanistic implications of AI integration into physical examination training in Latin America. A structured search of literature published between 2018 and 2025 was conducted across PubMed, Scopus, Web of Science, SciELO, RedALyC, and Google Scholar. Thirty-one peer-reviewed studies and three institutional documents met predefined relevance criteria and were analyzed through thematic synthesis. Four thematic domains emerged: (1) AI-assisted clinical simulation and automated feedback, (2) curricular integration and institutional implementation strategies, (3) governance and ethical supervision frameworks, and (4) emerging challenges related to digital literacy, technological dependence, and preservation of clinical judgement. Evidence suggests that AI enhances procedural precision and formative feedback; however, its educational value remains complementary and dependent on structured human-in-the-loop supervision. Based on these findings, the Modelo Educativo Digital basado en Inteligencia Artificial (MED-IA; in English, Medical Education with Artificial Intelligence) conceptual model is proposed, framing clinical competence development across three interconnected levels: technical execution, experiential patient interaction, and reflective judgement. The model integrates technological mediation with ethical oversight and humanistic formation. These findings highlight the need for transparent governance frameworks, teacher digital literacy, and context-sensitive institutional policies to ensure responsible AI implementation in Latin American medical education.
1 Introduction
Artificial intelligence (AI) has rapidly expanded within medical education, influencing both theoretical instruction and practical training, particularly in the teaching of physical examination and the development of clinical judgement (Gordon et al., 2024; Feigerlova et al., 2025). Applications such as intelligent simulators, adaptive tutors, learning analytics, and automated feedback systems have demonstrated potential to improve procedural accuracy, enhance formative assessment, and increase the safety of clinical training environments (Li et al., 2025; Díaz-Guio et al., 2026; Ortiz-Vilchis and Castillo-Reyes, 2025). These developments suggest that AI can serve as a technical scaffold to strengthen skill acquisition before students engage in real patient encounters.
In Latin America, however, the incorporation of these technologies remains uneven. Structural barriers—including high implementation costs, limited institutional digitalization, insufficient teacher training, and regulatory fragmentation—restrict systematic integration across medical schools (Aguilar-Bucheli et al., 2023; Marín González et al., 2025). Although emerging regional experiences highlight the benefits of AI-assisted simulation and digital tutoring, governance frameworks, curricular integration models, and ethical supervision mechanisms remain inconsistently developed (Avello-Sáez et al., 2024; Corzo-Zavaleta et al., 2025; Wolff Reyes and López Stewart, 2025). This heterogeneity reflects differences in digital maturity, institutional capacity, and regulatory oversight across countries.
Beyond technical innovation, the teaching of physical examination carries a distinctive epistemological and humanistic dimension. Physical examination is not merely a procedural sequence of inspection, palpation, percussion, and auscultation; it is also a relational act grounded in observation, touch, interpretation, empathy, and communication. The integration of AI into this domain therefore raises fundamental questions about supervision, clinical autonomy, preservation of professional judgement, and the maintenance of the doctor–patient relationship as the core of medical formation.
Despite growing literature on AI in medical education, fewer analyses critically examine how these technologies reshape the teaching of physical examination within specific regional contexts and what ethical and pedagogical implications emerge from their implementation. In Latin America, where institutional resources and regulatory frameworks vary substantially, understanding this transformation requires both contextual sensitivity and conceptual clarity.
Accordingly, this review is guided by the following research question:
How is AI reshaping the teaching of physical examination in Latin American medical education, and what ethical and pedagogical implications arise from its integration?
This study is therefore positioned as a critical narrative review with conceptual synthesis, situated within the interpretative paradigm of medical education research. Rather than aiming to produce a quantitative aggregation of homogeneous empirical evidence, the purpose is to critically examine theoretical, pedagogical, and governance-oriented contributions related to AI-assisted physical examination teaching and to articulate an integrative conceptual framework grounded in the Latin American context. This epistemological positioning clarifies that the manuscript combines documentary analysis with theoretical modeling to guide responsible and human-centered implementation.
2 Methodology
A critical theoretical and documentary reflection was developed to examine the role of AI in the teaching of physical examination in medicine, emphasizing its ethical, pedagogical, and humanistic dimensions. The study was based on the collection, selection, and interpretative analysis of recent academic literature and institutional documents, chosen for their conceptual relevance, currency, and applicability to contemporary medical education.
The documentary search covered publications from January 2018 to December 2025 and was conducted in PubMed, Scopus, Web of Science, SciELO, RedALyC, and Google Scholar. Descriptors and keywords in English and Spanish were combined, including “artificial intelligence,” “clinical skills,” “physical examination,” “medical education,” “Latin America,” “machine learning,” “human-in-the-loop,” “ethics,” and “medical curriculum.” Priority was given to open-access or institutional texts that directly or indirectly addressed the integration of AI into the teaching–learning process of physical examination, clinical simulation, automated feedback, and the development of professional competences in medicine.
The search strategy was structured using Boolean operators and controlled vocabulary adapted to each database. The core search string applied in PubMed and Scopus was:
(“artificial intelligence” OR “machine learning” OR “generative AI”)
AND (“medical education” OR “clinical skills” OR “physical examination”)
AND (“Latin America” OR “Brazil” OR “Mexico” OR “Chile” OR “Colombia” OR “Peru” OR “Ecuador”)
Equivalent Spanish descriptors were used in SciELO and RedALyC. Filters were applied to restrict results to publications between January 2018 and December 2025.
Inclusion criteria comprised:
(1) peer-reviewed articles or institutional reports,
(2) explicit reference to AI applications in medical education,
(3) conceptual or empirical discussion of clinical skills, simulation, or physical examination, and
(4) contextual relevance to Latin America or comparative global frameworks informing regional interpretation.
Exclusion criteria included:
(1) purely technical engineering studies without educational implications,
(2) publications focused exclusively on diagnostic AI without training components,
(3) opinion pieces lacking conceptual or empirical grounding, and
(4) articles not available in English or Spanish.
Screening and eligibility assessment were conducted independently by two authors, with discrepancies resolved through consensus discussion to ensure methodological consistency.
To strengthen methodological transparency, a structured screening process was implemented and is summarized in Figure 1. The initial search identified 112 potentially relevant records. After the removal of 18 duplicates, 94 records underwent title and abstract screening. Thirty-six were excluded for not addressing educational applications or for focusing exclusively on technical developments without formative implications. Fifty-eight full-text documents were subsequently assessed for eligibility. Twenty-four were excluded due to insufficient pedagogical, ethical, or contextual relevance. Ultimately, 34 references constituted the final documentary corpus of analysis, including peer-reviewed scientific articles and institutional reports. This corpus formed the basis for the interpretative synthesis and thematic structuring of the study, ensuring coherence between the analytical objectives, the selected evidence, and the conceptual framework developed throughout the manuscript.
Figure 1
The country filters included Brazil, Mexico, Chile, Colombia, Peru, and Ecuador because preliminary scoping searches indicated that these countries concentrate the majority of indexed publications on AI in medical education within the Latin American region. The broader descriptor “Latin America” was also included to capture studies from other countries not explicitly listed in the search string.
The selected material was grouped into two main sets. The first comprised peer-reviewed scientific publications analyzing the incorporation of AI in practical teaching, clinical reasoning, and medical decision-making. The second included institutional documents and normative frameworks from international organizations such as the World Health Organization (WHO), the United Nations Educational, Scientific and Cultural Organization (UNESCO), and the Inter-American Development Bank (IDB), which provided legal, ethical, and regulatory context for the analysis.
Although Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and structured appraisal tools [e.g., Scale for the Assessment of Narrative Review Articles (SANRA)] were not formally applied, this decision was epistemologically consistent with the study's classification as a critical narrative review with conceptual synthesis. The objective was not to aggregate homogeneous empirical data or perform quantitative synthesis, but to interpret theoretical, pedagogical, and governance-oriented contributions within a contextualized Latin American framework.
To reinforce internal validity, methodological transparency was ensured through explicit reporting of the search strategy, defined eligibility criteria, structured screening stages, and thematic coding procedures. Conceptual relevance to physical examination teaching, ethical governance considerations, and alignment with the human-in-the-loop analytical framework were used as internal evaluative criteria. Interpretative triangulation among authors was conducted to reduce subjective bias and strengthen analytical coherence.
The analysis was carried out through comprehensive reading, thematic coding, and interpretative triangulation among the authors, identifying four central axes: scientific production and regional trends; clinical simulation and AI-assisted digital tools; institutional models, governance, and ethical supervision; and emerging ethical, pedagogical, and cultural challenges related to the preservation of clinical judgement and patient interaction under the human-in-the-loop principle.
The analytical axes guided the structured presentation of findings through tables and figures that synthesize the reviewed evidence and conceptual contributions. Methodological consistency was maintained through triangulation of criteria among the authors, prioritizing interpretative clarity, internal coherence, and alignment with the study's objectives.
Finally, methodological rigor and transparency were reinforced through systematic documentation of sources, interpretative triangulation, and inter-evaluator validation, ensuring internal consistency in data interpretation. Consistent with the epistemological orientation of critical narrative reviews, rigor was understood as conceptual coherence, transparency in source selection, and analytical triangulation rather than statistical aggregation or quantitative synthesis.
3 Results
To contextualize the regional findings derived from the screened documentary corpus, a comparative synthesis between global and Latin American scientific production is presented in Table 1. While the structured search strategy primarily targeted literature relevant to Latin American medical education, selected high-level international reviews and consolidated frameworks were incorporated as secondary analytical benchmarks. Global publication volumes and trends cited in this manuscript derive from these published syntheses rather than from an independent systematic global search conducted by the authors. The purpose of this comparison is interpretative rather than quantitative, aiming to situate regional developments within broader internationally reported patterns without claiming comprehensive global coverage.
Table 1
| Dimension | Global evidence (2018–2025) | Latin American evidence (2018–2025) | Main gap |
|---|---|---|---|
| Scientific production | High and systematized, with over 270 studies and consolidated frameworks (Topol, 2019; Gordon et al., 2024; Feigerlova et al., 2025). | Recent, scattered growth with lower methodological rigor (Aguilar-Bucheli et al., 2023; Ortiz-Vilchis and Castillo-Reyes, 2025). | Greater regional coordination and applied empirical studies. |
| Curricular integration | Frameworks and formal programs linking AI with practical assessment (Tolentino et al., 2024). | Partial introduction in courses or pilot projects (Ramírez et al., 2025). | Formalize the teaching of AI within the medical curriculum. |
| Clinical simulation | Widespread use of haptic simulators and intelligent feedback (Li et al., 2025). | Isolated and virtual experiences (Ávila Rueda et al., 2024). | Scale up AI-based simulation and structured assessment. |
| Performance assessment | AI-assisted OSCE and mini-CEX with objective metrics (Linares et al., 2023; Yokose et al., 2025). | Self-assessments without standardization (Rognoni Amrein et al., 2024). | Implement validated objective assessment tools. |
| Ethics and governance | Defined ethical frameworks and teacher supervision (Roveta et al., 2025; Masters, 2019). | Initial initiatives in Chile and Peru (Avello-Sáez et al., 2024). | Consolidate national regulations and ethical training. |
| Teaching role | Teacher as critical mediator (human-in-the-loop) (Feigerlova et al., 2025). | Irregular supervision and low digital competence (Wolff Reyes and López Stewart, 2025). | Systematic training in educational AI. |
Global and regional comparison of scientific evidence on AI in medical education (2018–2025).
Source: Authors' own elaboration.
In this section, the analysis is structured around four thematic axes that emerged as recurrent and transversal domains during the bibliographic screening and thematic coding process. This organization enables the identification of common patterns, contrasting approaches, and gaps in the literature concerning AI-assisted physical examination within the Latin American context. Each axis presents a structured synthesis of the findings derived from the selected corpus, aiming to describe prevailing trends before engaging in interpretative discussion.
3.1 Scientific production and regional trends
Between 2018 and 2025, Latin American literature on AI in medical education has demonstrated progressive expansion. Within the analyzed corpus, publications reflect increasing academic interest after 2021, although overall production remains comparatively limited and uneven across countries. Bibliometric and scientometric reviews indicate that regional output is dispersed, predominantly published in Spanish, and characterized by heterogeneous methodological designs (Marín González et al., 2025).
Aguilar-Bucheli et al. (2023) identified structural barriers to AI integration in medical training, including high implementation costs, limited institutional digitalization, and insufficient faculty preparation. These findings are corroborated by Neves et al. (2025), whose scientometric mapping reports Brazil as the regional leader in publications and documents a notable increase in scientific output beginning in 2021. The dominant thematic areas include generative AI applications in learning, AI-enhanced gamification, algorithm-supported clinical decision-making, and digital platforms for tele-education. Similarly, Ramírez et al. (2025) describe a growing incorporation of AI into undergraduate curricula, particularly through simulation-based and virtual learning environments. However, most publications remain exploratory or descriptive in nature, with limited use of structured outcome measurements or longitudinal educational evaluation.
Across the corpus, contributions originate primarily from Brazil, Mexico, Chile, Colombia, Ecuador, and Peru, reflecting differentiated levels of digital maturity and institutional development. Although regional scientific activity is expanding, evidence suggests variability in methodological rigor, curricular formalization, and governance consolidation.
Thematic concentration within the analyzed corpus also appears uneven, with greater representation in primary care, clinical simulation, and general medical training contexts, whereas specialty-specific applications of AI in physical examination (e.g., cardiology or neurology) remain comparatively underrepresented in regional publications.
3.2 Clinical simulation and AI-assisted digital tools
Clinical simulation and AI-assisted digital tools constitute one of the most recurrent domains identified within the analyzed corpus. The reviewed literature includes narrative reviews, conceptual analyses, and empirical studies conducted in simulation-based educational settings, reflecting diverse methodological approaches to the integration of AI in practical medical training.
Ávila Rueda et al. (2024), in a regional review focused on undergraduate medical education, describe the incorporation of intelligent clinical simulators and virtual tutors designed to support procedural skill acquisition. These systems enable structured rehearsal of inspection, palpation, percussion, and auscultation within standardized environments prior to patient interaction.
Díaz-Guio et al. (2026), analyzing simulation-based educational interventions, report that algorithm-driven feedback mechanisms may contribute to enhanced scenario realism and more structured formative assessment processes. However, most reported outcomes rely on short-term evaluations and perceived performance improvements rather than longitudinal clinical competence indicators.
The MED-IA framework, initially proposed by Chávez Mostajo (2026) as a competency-oriented digital model, has been further discussed by Ramírez et al. (2025), who explore its pedagogical implementation through digital tutors, learning analytics, and automated feedback systems in technology-enhanced learning environments. In these analyses, AI is conceptualized primarily as an instructional scaffold supporting guided supervision rather than as an autonomous evaluative agent.
Empirical contributions, such as those described by Rognoni Amrein et al. (2024), report improvements in procedural precision, structured feedback cycles, and learner self-efficacy in high-fidelity or haptic simulation contexts. Nevertheless, objective correlations between AI-assisted simulation and sustained bedside competence remain limited within the regional literature. Despite these limitations, several studies report measurable improvements in procedural sequencing accuracy and structured feedback efficiency when AI tools are integrated within supervised simulation environments, suggesting that early-stage technical gains may serve as foundational precursors to subsequent clinical competence development.
Across the corpus, AI-enhanced simulation is predominantly positioned as a complementary tool intended to reinforce practical training and formative feedback processes. Its adoption, however, appears conditioned by institutional infrastructure, technological access, and faculty digital preparedness, factors that vary across Latin American contexts.
3.3 Institutional models and implementation policies
A third thematic axis identified within the analyzed corpus concerns ethical governance, professional responsibility, and the preservation of clinical judgement in the context of AI-mediated medical education. Beyond technological implementation and curricular integration, this domain addresses the normative and humanistic dimensions that accompany the incorporation of AI into clinical training processes.
The reviewed literature consistently emphasizes that AI integration requires explicit supervisory frameworks to safeguard academic integrity, patient confidentiality, and professional accountability. Institutional declarations and policy-oriented documents highlight the need for transparency in the use of digital tools, clear acknowledgment of AI-assisted contributions, and structured faculty oversight in educational settings.
Several authors underscore the importance of maintaining the educator's role as a critical mediator under a human-in-the-loop approach. Within this perspective, AI is positioned as a supportive system operating under continuous human supervision rather than as an autonomous decision-making authority. This framework seeks to preserve reflective capacity, diagnostic reasoning, and professional judgement during the learning process.
Regional analyses further indicate heterogeneous regulatory development across Latin America. While institutions in countries such as Chile and Peru have initiated ethical declarations and academic guidelines for AI use, comprehensive and consolidated national frameworks remain under development. The literature reflects ongoing efforts to align technological innovation with educational responsibility, although institutional maturity varies across contexts.
The conceptual synthesis of these findings is represented in Figure 2, which illustrates the ethical governance triangle underpinning AI integration in medical education. This framework articulates three interdependent dimensions: institutional regulation, pedagogical supervision, and professional responsibility. Together, these components reflect the structural conditions identified in the reviewed corpus as necessary to ensure responsible and human-centered implementation of AI within medical training.
Figure 2
Across the analyzed literature, ethical governance is consistently presented not as an accessory consideration, but as a foundational condition for integrating AI into medical education. The convergence of institutional, pedagogical, and professional dimensions underscores the need to maintain clinical judgement and the human dimension of the doctor–patient relationship within digitally mediated learning environments.
Building on these governance considerations, the curricular competences proposed for AI-assisted physical examination and their ethical–professional articulation are synthesized in Table 2.
Table 2
| Dimension | Competence | Learning outcome | Performance indicator | Ethical–professional component |
|---|---|---|---|---|
| Technical | Applies AI-based clinical simulators and intelligent feedback tools to perform inspection, palpation, percussion, and auscultation accurately. | Demonstrates procedural precision and autonomy in virtual and real settings. | Accuracy and consistency in the execution of physical examination maneuvers. | Ensures patient safety and respects the limits of algorithmic assistance. |
| Cognitive | Integrates data analytics and machine learning outputs into diagnostic reasoning. | Interprets AI-generated information critically to support clinical hypotheses. | Provides reasoned decisions supported by quantitative and qualitative evidence. | Maintains human judgement as the central element of clinical decision-making. |
| Pedagogical | Engages in supervised learning under the human-in-the-loop model. | Collaborates with educators to validate interpretations and refine clinical techniques. | Demonstrates improvement through iterative feedback cycles. | Recognizes the role of the teacher as ethical supervisor and guarantor of learning integrity. |
| Ethical | Applies institutional and professional norms for responsible AI use in clinical education. | Identifies risks of dehumanization and bias in AI-assisted training. | Implements transparency and informed-consent procedures during simulated encounters. | Promotes empathy, respect, and accountability in all learning interactions. |
| Humanistic | Balances technological efficiency with empathy and communication. | Builds trust and therapeutic connection with patients in both simulated and real contexts. | Demonstrates emotional awareness and effective interpersonal communication. | Upholds dignity, compassion, and the centrality of the patient's experience. |
Curricular competences related to AI-assisted physical examination.
Source: Authors' own elaboration.
3.4 Emerging ethical, pedagogical, and cultural challenges
The fourth category encompasses findings related to the ethical, pedagogical, and cultural challenges associated with the introduction of AI into Latin American medical education. Several publications emphasize the importance of strengthening digital literacy and critical thinking competencies among both students and faculty members as part of the educational response to emerging technologies.
From a regional perspective, Torres Salinas (2025) notes that the increasing use of AI tools among Latin American medical students may involve risks such as technological dependence, model accuracy limitations, and potential weakening of clinical judgement. These observations highlight the relevance of continuous supervision, structured training strategies, and explicit human-in-the-loop ethical frameworks within medical education contexts.
Similarly, Gutiérrez-Cirlos et al. (2023) analyzed the incorporation of ChatGPT in medical education, identifying pedagogical and research support advantages alongside concerns related to academic integrity and the reliability of generated clinical information. The authors describe moderate, supervised use and human validation as recurrent considerations in the literature.
Mayol (2023) describes how generative AI expands pedagogical possibilities through the development of clinical cases, personalized rubrics, and adaptive educational resources, while also introducing ethical considerations related to data privacy, algorithmic bias, and the evolving supervisory role of educators. Wolff Reyes and López Stewart (2025) complement this perspective by reporting initial resistance to technological change among teachers and healthcare professionals, associated with concerns about professional displacement and limited specialized training.
Ávila Rueda et al. (2024) and Ramírez et al. (2025) further indicate that, despite advancements in simulation-based education, regulatory and ethical gaps continue to influence the equitable expansion of AI tools across the region. These publications describe increasing attention to digital ethics, technological responsibility, and preservation of human interaction within clinical teaching–learning processes.
Overall, the reviewed publications portray an expanding yet heterogeneous landscape in the application of AI within Latin American medical education. Although initiatives in simulation, institutional modeling, and ethical governance are increasingly reported, levels of implementation, regulatory development, and institutional readiness vary across national and academic contexts.
4 Discussion
Global literature on AI in medical education has expanded rapidly; however, its methodological maturity and translation into measurable clinical outcomes remain heterogeneous. As a global benchmark, BEME Guide No. 84 mapped 278 publications and documented AI applications across admission processes, teaching strategies, assessment models, and clinical reasoning development, while emphasizing the need for explicit ethical frameworks, defined competencies, and structured curricular integration (Topol, 2019; Gordon et al., 2024). Systematic reviews focusing on educational outcomes similarly report limited trial robustness, heterogeneous methodological designs, and scarce objective measurement of sustained clinical performance (Feigerlova et al., 2025).
This international panorama provides a comparative reference for interpreting the Latin American context described in the Results. Regional literature demonstrates progressive growth, yet remains constrained by structural barriers including implementation costs, uneven digital infrastructure, and limited faculty training. Advances appear more consolidated in simulation-based environments and digital tutoring systems than in the objective assessment of physical examination competences in real clinical settings. Although global trends in AI in medical education provide a broader analytical background, the present synthesis consistently interprets these developments through the specific lens of physical examination teaching, ensuring that technological, governance, and ethical considerations remain anchored to clinical skill formation.
However, the Latin American context is not institutionally homogeneous. Scientometric mapping indicates that countries such as Brazil and Chile demonstrate comparatively higher research productivity and digital integration in medical education, whereas other contexts report emerging or pilot-level initiatives constrained by technological investment and regulatory consolidation (Neves et al., 2025; Marín González et al., 2025). Differences in national data protection frameworks and institutional governance structures further influence the pace and scope of AI implementation (United Nations Educational, Scientific and Cultural Organization, 2021; World Health Organization, 2021). Recognizing this heterogeneity prevents overgeneralization and underscores that AI-assisted physical examination training must be adapted to diverse institutional maturities across the region.
Within Latin America, the most consistent developments are concentrated in AI-assisted clinical simulation, learning analytics, and automated feedback supporting inspection, palpation, percussion, and auscultation training. This pragmatic orientation aligns with international recommendations advocating the integration of AI as a structured pedagogical scaffold—including intelligent tutors, haptic simulators, and real-time feedback mechanisms—embedded within explicit curricular frameworks (Tolentino et al., 2024; Mir et al., 2023).
At the same time, regional publications emphasize that technological adoption is strongly conditioned by institutional infrastructure, faculty digital literacy, and governance structures. This interaction, conceptualized in the present study through the ethical governance triangle (Figure 2), illustrates how institutional regulation, pedagogical supervision, and professional responsibility operate as interdependent determinants of sustainable AI integration. In this sense, the central challenge is not merely technological implementation, but ensuring that AI strengthens the clinical method while preserving reflective judgement and the human dimension of medical training.
Beyond structural governance considerations, the human-in-the-loop principle transforms the epistemological architecture of clinical training. Rather than positioning AI as an evaluative authority, it redefines the educator's role as an interpretative supervisor who contextualizes algorithmic outputs within experiential and relational dimensions of care. This shift reframes simulation not as a technological endpoint but as a preparatory stage within a layered developmental continuum, thereby expanding the analytical depth of AI integration beyond functional efficiency toward professional formation.
A key issue concerns the authentic assessment of physical examination skills. Global evidence emphasizes the need to move beyond perception-based evaluations and knowledge tests toward objective performance measurements, including Objective Structured Clinical Examination (OSCE) checklists, procedural rubrics, diagnostic reasoning matrices, and indicators of skill transfer to real clinical settings (Feigerlova et al., 2025; Yokose et al., 2025). Curricular syntheses further recommend integrating AI competence modules—such as data literacy, critical interpretation of algorithmic outputs, safety awareness, and ethical reasoning—directly into practical stations and post-simulation debriefings (Linares et al., 2023; Tolentino et al., 2024).
Regional scholarship reports advances in simulation-based education and formative feedback; however, empirical studies correlating structured exposure to AI tools with objectively verified improvements in bedside performance remain limited. Strengthening this line of inquiry would benefit from multicentric collaboration and shared evaluation protocols to enhance methodological comparability.
The ethical–governance dimension operates as a legitimizing framework for AI adoption. BEME guide no. 84 calls for explicit ethical frameworks and defined competencies, while implementation studies highlight transparency, traceability of AI use, and systematic human supervision (Masters, 2019; Gordon et al., 2024; Roveta et al., 2025; Memarian and Doleck, 2024). Regional publications similarly describe institutional declarations and responsible-use guidelines, alongside concerns regarding excessive delegation of evaluative functions to algorithmic systems. This convergence suggests that AI-based educational initiatives should clearly specify supervisory roles, override mechanisms, and data governance criteria to ensure accountability and preservation of formative agency.
Beyond normative principles of transparency and supervision, operational complexities require explicit consideration. The integration of AI into physical examination teaching involves the processing of sensitive educational and potentially clinical data, which may be stored or analyzed through external digital platforms. This raises concerns regarding data sovereignty, cross-border data storage, and compliance with national privacy regulations, particularly in countries with heterogeneous regulatory maturity (World Health Organization, 2021; United Nations Educational, Scientific and Cultural Organization, 2021).
In addition, medico-legal responsibility must remain clearly defined. When AI-assisted systems provide formative feedback or influence procedural training, accountability cannot be attributed to algorithmic outputs but must remain under the supervision of educators and institutional governance structures. Algorithmic auditability—understood as the capacity to trace system updates, training datasets, and decision pathways—constitutes a necessary safeguard to ensure ethical and transparent implementation. Without structured mechanisms addressing these operational dimensions, ethical governance risks remaining declarative rather than functionally embedded within medical education practice.
Regarding effectiveness, recent meta-analyses on generative AI in education report moderate improvements compared with traditional instructional strategies, although with considerable heterogeneity and risk of bias (Li et al., 2025; Pham et al., 2025). These findings support a cautious interpretation: generative AI appears particularly useful for formative feedback, structured case generation, and scaffolding of clinical reasoning, yet its integration should be guided by explicit operational objectives and accompanied by objective assessment of practical performance [e.g., OSCE, Mini Clinical Evaluation Exercise (mini-CEX)].
Some controlled studies report statistically significant improvements in objective performance metrics, including diagnostic accuracy and procedural sequencing, when AI-assisted tools are embedded within structured curricula, and meta-analytic findings likewise report moderate gains in feedback efficiency compared with traditional instructional approaches (Li et al., 2025). In resource-constrained settings, AI-enabled simulation environments may also offer cost-efficiency advantages by reducing reliance on high-cost physical laboratories and enabling repeated deliberate practice through scalable digital platforms. A balanced interpretation therefore recognizes that, under defined pedagogical conditions, AI integration may expand access to structured clinical training and contribute measurable gains in technical competence, while simultaneously requiring robust governance mechanisms to preserve ethical oversight and pedagogical integrity.
Despite the capacity of AI-enhanced simulation to reproduce physical examination maneuvers and improve procedural precision, a persistent distinction remains between technical execution and interpretative clinical judgement. Evidence indicates that simulation effectively strengthens motor and communicative competencies (Elendu et al., 2024); however, it does not replicate the phenomenological and relational dimensions inherent to real patient encounters (Sun et al., 2024). In this context, simulation constitutes a foundational pedagogical strategy that may be enhanced—but not replaced—by AI-driven feedback and learning analytics.
This distinction is not merely procedural but epistemological. Technical simulation enhances motor coordination, sequencing accuracy, and exposure to controlled variability; however, clinical judgement emerges from interpretative synthesis, contextual reasoning, and relational awareness developed within authentic patient encounters. The educational value of AI-assisted simulation therefore lies in its preparatory and scaffolding function rather than in the replacement of experiential learning. Recognizing this layered progression clarifies that simulation and governance mechanisms are not repetitive themes, but structurally interconnected stages within a developmental continuum of professional competence.
Beyond predominantly technical integration frameworks, the MED-IA Model proposes a structured synthesis that situates digital competence within a humanistic and reflective paradigm, offering a regional pedagogical perspective on the balance between technological innovation, empathy, and clinical reasoning. The distinction between technical skill and clinical judgement is represented in the integrative MED-IA ladder (Figure 4), which illustrates the progressive evolution of medical learning from execution to reflective integration.
Figure 4
The following conceptual synthesis represents the authors' interpretative contribution derived from the analyzed evidence and does not claim empirical validation beyond the reviewed corpus. Although the MED-IA Model builds upon established frameworks in competency-based medical education, simulation pedagogy, and human-in-the-loop supervision, its distinctive contribution lies in integrating these dimensions into a unified and progressive structure centered specifically on physical examination teaching within the Latin American context. In contrast to existing models that predominantly emphasize technical integration or digital literacy competencies, MED-IA conceptualizes the transition from procedural execution to experiential relational competence and ultimately to reflective clinical judgement as a pedagogically sequenced developmental continuum. This integration foregrounds the ethical–humanistic dimension not as an accessory element but as the culminating stage of clinical competence development, thereby offering a context-sensitive synthesis that bridges technological scaffolding with the phenomenological core of the doctor–patient encounter.
Although the MED-IA Model is grounded in established pedagogical and governance principles, its implementation has not yet been empirically validated through pilot studies or feasibility assessments. Future research should therefore focus on testing its applicability across diverse institutional settings within Latin America to evaluate its operational impact and educational outcomes.
At its foundational level, AI functions as a mediator of technical skill acquisition, enabling procedural precision and structured feedback through simulation-based environments. At an intermediate level, supervised interaction with real patients consolidates diagnostic reasoning, empathy, and ethical awareness, reinforcing clinical judgement as the core of professional formation. This stage reflects the bidirectional nature of the clinical encounter, in which technical interpretation coexists with relational trust and therapeutic communication. Finally, the integrative level articulates technological and human dimensions into a professional synthesis that promotes autonomy, critical thinking, and moral responsibility. Within this framework, AI is conceptualized as an enabling instrument that supports—but does not substitute—the experiential and relational foundations of medical education.
4 Conclusions
AI represents a valuable supportive tool in the teaching of physical examination by enhancing technical precision, facilitating formative feedback, and supporting the analysis of student performance. Nevertheless, its educational contribution remains complementary, as AI cannot replace clinical judgement or the human interaction that defines the medical act and the trust-based patient–physician relationship. Further empirical and multicentric research is therefore required to evaluate the measurable impact of AI-assisted training on clinical performance, ethical decision-making, and patient communication.
In the Latin American context, scientific output on AI in medical education remains limited and heterogeneous, consisting primarily of isolated experiences focused on simulation and digital tutoring. This underscores the need for robust comparative and multicentric studies capable of systematically assessing the influence of AI on the development of clinical reasoning, bedside competence, and communication skills.
The integration of AI into medical curricula demands sustained institutional commitment, combining digital literacy, professional ethics, structured supervision, and transparent governance frameworks. Within this perspective, the integrative MED-IA Model conceptualizes clinical learning across technical, experiential, and reflective dimensions, illustrating how technology—when guided by pedagogical and ethical principles—can contribute to the formation of reflective, empathetic, and competent physicians in the digital era.
Statements
Author contributions
AT: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing. PG: Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing. MF: Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing. GT: Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript. During the writing and linguistic revision phase, AI-assisted support (ChatGPT, GPT-5 model, OpenAI) was used exclusively for syntactic refinement, stylistic improvement, and discursive coherence, under the direct supervision of the authors. Its use adhered to international standards of scientific integrity and editorial transparency established by the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE) (Committee on Publication Ethics, 2023; International Committee of Medical Journal Editors, 2025). The tool did not participate in data analysis, interpretation of findings, or the formulation of conclusions, thereby preserving full intellectual authorship and ethical responsibility.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Aguilar-Bucheli, A. D., Borja-Espinoza, M. A., Cadena-Vargas, E. F., and Rojas-Salazar, P. (2023). Artificial intelligence in medical education: Latin American context. MetroCiencia 31, 45–59. doi: 10.47464/MetroCiencia/vol31/2/2023/21-34
Avello-Sáez, A., Lucero-González, M., and Villagrán, S. (2024). Desarrollo de una declaración de uso de inteligencia artificial con una perspectiva de integridad académica en educación médica y ciencias de la salud. Rev. Med. Clin. Las Condes 35, 412–420. doi: 10.1016/j.rmclc.2024.06.003
Ávila Rueda, E., Bravo Flores, B., and Espinoza Guamán, P. (2024). Inteligencia artificial en la educación médica de pregrado: avances, ventajas y desafíos. Polo Conoc. 9, 1631–1647.
Chávez Mostajo, N. I. (2026). MED-IA: Modelo educativo digital para la formación médica de especialidades clínicas, basado en inteligencia artificial. Educ. Méd. 27:101116. doi: 10.1016/j.edumed.2025.101116
Committee on Publication Ethics (2023). Authorship and AI Tools: COPE Position Statement and Guidance on the Use of Artificial Intelligence Tools in Publications. Available online at: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools (Accessed February 3, 2026).
Corzo-Zavaleta, J., Navarro-Castillo, Y., and Ugaz-Rivero, M. (2025). Uso de la inteligencia artificial en la educación universitaria: exploración bibliométrica. Desde el Sur 17:e0010. doi: 10.21142/DES-1701-2025-0010
Díaz-Guio, D. A., Infante-Villagrán, V. A., Ángel-Díaz, C., Montes, D., Díaz-Gómez, A. S., Pantoja, A., et al. (2026). Inteligencia artificial en la educación basada en simulación clínica en América Latina: un estudio transversal de los conocimientos, prácticas y percepciones de los educadores. Educ. Méd. 27:101172. doi: 10.1016/j.edumed.2026.101172
Elendu, C., Amaechi, D. C., Okatta, A. U., Amaechi, E. C., Elendu, T. C., Ezeh, C. P., et al. (2024). The impact of simulation-based training in medical education: a review. Medicine 103:e38813. doi: 10.1097/MD.0000000000038813
Feigerlova, E., Hani, H., and Hothersall-Davies, E. (2025). A systematic review of the impact of artificial intelligence on educational outcomes in health professions education. BMC Med. Educ. 25. doi: 10.1186/s12909-025-06719-5
Gordon, M., Singh, S., and Patel, N. (2024). BEME guide no. 84: artificial intelligence in health professions education. Med. Teach. 46, 521–545. doi: 10.1080/0142159X.2024.2314198
Gutiérrez-Cirlos, C., Navarro, G., and Castañeda, M. (2023). Uso de ChatGPT en docencia médica: ventajas y dilemas éticos. Educ. Méd. Tecnol. 9, 71–85. doi: 10.24875/GMM.230001671
International Committee of Medical Journal Editors (2025). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Available online at: https://www.icmje.org/recommendations/ (Accessed February 5, 2026).
Li, X., Yan, X., and Lai, H. (2025). The ethical challenges in the integration of artificial intelligence and large language models in medical education: a scoping review. PLOS One 20:e0333411. doi: 10.1371/journal.pone.0333411
Linares, J. J. G., Fuentes, M. C. P., and Galdames, I. S. (2023). Aprovechando el potencial de la inteligencia artificial en la educación: equilibrando beneficios y riesgos. Eur. J. Educ. Psychol. 16, 1–8. doi: 10.32457/ejep.v16i1.2205
Marín González, D., Prampen Rojas, M. E., and Paumier Durán, A. G. (2025). Inteligencia artificial y educación médica: análisis bibliométrico. Rev. Conrado 21:e4666. Available online at: https://conrado.ucf.edu.cu/index.php/conrado/article/view/4666
Masters, K. (2019). Artificial intelligence in medical education. Med. Teach. 41, 976–980. doi: 10.1080/0142159X.2019.1595557
Mayol, J. (2023). Inteligencia artificial generativa y educación médica. Educ. Méd. 24:100851. doi: 10.1016/j.edumed.2023.100851
Memarian, B., and Doleck, T. (2024). Human-in-the-loop in artificial intelligence in education: a review and entity-relationship (ER) analysis. Comput. Hum. Behav. 2:100053. doi: 10.1016/j.chbah.2024.100053
Mir, M. M., Mir, G. M., Raina, N. T., et al. (2023). Application of artificial intelligence in medical education: current scenario and future perspectives. J. Adv. Med. Educ. Prof. 11, 133–140. doi: 10.30476/JAMP.2023.98655.1803
Neves, C., Oliveira, T., Cruz-Jesus, F., and Venkatesh, V. (2025). Extending the unified theory of acceptance and use of technology for sustainable technologies context. Int. J. Inf. Manage. 80:102838. doi: 10.1016/j.ijinfomgt.2024.102838
Ortiz-Vilchis, C. M., and Castillo-Reyes, I. S. (2025). Uso de inteligencia artificial en la atención de primer contacto y medicina familiar: un metaanálisis sobre casos médicos complejos. medRxiv, 1–9. doi: 10.31219/osf.io/efy4d_v2
Pham, T. D., Karunaratne, N., Exintaris, B., Liu, D., Lay, T., Yuriev, E., et al. (2025). The impact of generative AI on health professional education: a systematic review in the context of student learning. Med. Educ. 59, 1280–1289. doi: 10.1111/medu.15746
Ramírez, D., Somoza, G. A., Olivares, E., María, M., and Gabriela, A. (2025). Avances en el uso de inteligencia artificial en la educación médica latinoamericana. ALERTA Revista Científica del Instituto Nacional de Salud 8, 88–95. doi: 10.5377/alerta.v8i1.19194
Rognoni Amrein, G., Benet Bertran, P., Castro-Salomó, A., Gomar Sancho, C., et al. (2024). La simulación clínica en la educación médica: ventajas e inconvenientes del aprendizaje al lado del paciente y en entorno simulado. Med. Clin. Pract. 7:100459. doi: 10.1016/j.mcpsp.2024.100459
Roveta, A., Castello, L. M., Massarino, C., Francese, A., Ugo, F., and Maconi, A. (2025). Artificial intelligence in medical education: a narrative review on implementation, evaluation, and methodological challenges. AI 6:227. doi: 10.3390/ai6090227
Sun, W., Jiang, X., Dong, X., Yu, G., Feng, Z., and Shuai, L. (2024). The evolution of simulation-based medical education research: from traditional to virtual simulations. Heliyon 10:e35627. doi: 10.1016/j.heliyon.2024.e35627
Tolentino, R., Baradaran, A., Gore, G., Pluye, P., and Abbasgholizadeh-Rahimi, S. (2024). Curriculum frameworks and educational programs in AI for medical students, residents, and practicing physicians: scoping review. JMIR Med. Educ. 10:e54793. doi: 10.2196/54793
Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56. doi: 10.1038/s41591-018-0300-7
Torres Salinas, C. (2025). Inteligencia artificial en la formación médica: riesgos y desafíos en estudiantes de medicina en Latinoamérica. Pediatría 52, 5–6. doi: 10.31698/ped.52032025002
United Nations Educational, Scientific and Cultural Organization (2021). Recommendation on the Ethics of Artificial Intelligence. Available online at: https://unesdoc.unesco.org/ (Accessed February 7, 2026).
Wolff Reyes, M., and López Stewart, G. (2025). Inteligencia artificial: impacto en la formación profesional y en el ejercicio de la medicina. Bol. Acad. Chil. Med. 61, 363–370. doi: 10.69700/wf0fsa83
World Health Organization (2021). Ethics and Governance of Artificial Intelligence for Health. Available online at: https://www.who.int/ (Accessed February 10, 2026).
Yokose, M., Hirosawa, T., Sakamoto, T., Kawamura, R., Suzuki, Y., Harada, Y., et al. (2025). The validity of generative artificial intelligence in evaluating medical students in objective structured clinical examination: experimental study. JMIR Form. Res. 9:e79465. doi: 10.2196/79465
Summary
Keywords
artificial intelligence, clinical judgement, ethics, medical education, physical examination
Citation
Torres A, González P, Fors M and Trujillo G (2026) Artificial intelligence in physical examination teaching in Latin America: a critical narrative review and conceptual model proposal. Front. Comput. Sci. 8:1798475. doi: 10.3389/fcomp.2026.1798475
Received
03 February 2026
Revised
18 March 2026
Accepted
19 March 2026
Published
13 April 2026
Volume
8 - 2026
Edited by
Siyabonga Mhlongo, University of Johannesburg, South Africa
Reviewed by
Vikesh Agrawal, Netaji Subhash Chandra Bose Medical College, India
Jordan Perchik, University of Alabama at Birmingham, United States
Copyright
© 2026 Torres, González, Fors and Trujillo.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Martha Fors, martha.fors@udla.edu.ec