ORIGINAL RESEARCH article

Front. Educ., 08 May 2026

Sec. Assessment, Testing and Applied Measurement

Volume 11 - 2026 | https://doi.org/10.3389/feduc.2026.1730207

Generative AI in the Peruvian Amazon: a qualitative study on university teachers’ perceptions after a training workshop

  • 1. Cesar Vallejo University, Herrera, Peru

  • 2. Institute for the Future of Education, Tecnologico de Monterrey, Monterrey, Mexico

Abstract

Generative Artificial Intelligence (GenAI) is reshaping higher education by enabling personalized learning, automating assessment processes, and enhancing student support. In regions such as the Peruvian Amazon, the integration of GenAI presents both opportunities and significant contextual challenges due to disparities in infrastructure, digital skills, and pedagogical training. However, there is a lack of qualitative studies exploring how university teachers in marginalized territories perceive the pedagogical implications of GenAI, especially after receiving targeted training. This study aimed to explore the perceptions of university professors from a public Amazonian institution regarding the impact, challenges, and educational opportunities of GenAI following a professional development workshop. A qualitative approach was used, analyzing the transcriptions of 35 video responses with IRaMuTeQ (0.8 alpha 7), an R-based textual analysis program. Results showed that teachers associated GenAI with enhancing academic rigor and student-centered practices, but also expressed concerns about ethical use and technological dependence. Key challenges included limited access to infrastructure, the need for pedagogical training, and digital equity in rural contexts. The analysis also revealed that teachers valued hands-on training and proposed contextually grounded strategies to integrate GenAI in their classrooms. In conclusion: a) GenAI is perceived as a pedagogical ally rather than a threat, b) ethical, technical, and contextual factors shape its adoption, c) professional development is essential to bridge knowledge gaps, and d) the Amazonian perspective offers unique insights into inclusive and sustainable AI integration in higher education.

1 Introduction

The emergence of generative artificial intelligence (GenAI) is reshaping the foundations of higher education on a global scale. This technology enables unprecedented automation of academic tasks and fosters more adaptive, student-centered learning environments (Akavova et al., 2023; Lee and Moore, 2024). In low-resource contexts, these capabilities can also enhance teaching efficiency by reducing administrative workload. Moreover, its integration into pedagogical practices has accelerated the shift toward more open, interactive, and personalized educational models (Sajja et al., 2023; Tang, 2024). In light of this global transformation, it is crucial to examine the challenges and opportunities GenAI presents, particularly within understudied contexts such as the Amazonian regions.

Despite GenAI's progress, significant knowledge gaps remain regarding its implementation in peripheral regions such as the Peruvian Amazon. Particularly, there is limited qualitative evidence on how teachers in these contexts interpret and integrate GenAI into practice. Limited infrastructure, unstable connectivity, and scarce teacher training opportunities are persistent barriers (Anaya Figueroa et al., 2021), which can constrain meaningful adoption by limiting access, promoting fragmented or superficial use of GenAI tools, and reinforcing existing digital inequalities in educational practice. Moreover, most existing research has adopted quantitative approaches, resulting in a limited qualitative understanding of how teachers in these regions perceive and integrate emerging technologies into their daily practice (Davis, 2024; Hrastinski et al., 2019; McGrath et al., 2023). This research addresses that gap by providing empirical evidence from an educational context often excluded from global technological discussions. More specifically, the study aims to analyze: (a) perceptions of GenAI's impact, (b) implementation challenges, (c) strategies and recommendations for its use, and (d) the role of training in shaping pedagogical integration.

GenAI has proven to be a versatile tool in university teaching, with applications ranging from automatic content generation to the personalization of learning pathways. Its capacity to provide immediate feedback, suggest tailored exercises, and synthesize complex information has been documented across multiple disciplines (Dai and Ke, 2022; Hutson et al., 2022; Stafie et al., 2023). Large language models (LLMs), such as ChatGPT, have been swiftly adopted in educational settings as assistants for writing, research, and coding tasks (Peláez-Sánchez et al., 2024; Wang et al., 2024; Xiao et al., 2023). These functionalities open new possibilities for educators aiming to innovate in their pedagogical practices.

The enthusiasm surrounding GenAI should not obscure the ethical concerns it raises in educational contexts. Algorithmic opacity, data bias, and threats to privacy are critical issues that demand regulation and ongoing reflection (Williamson and Prybutok, 2024; Zhou et al., 2022). Additionally, there is a risk of fostering technological dependency that may undermine critical thinking or promote superficial learning (Bai et al., 2023; Shibani et al., 2024). Recent research has highlighted the urgency of fostering an ethical culture in the academic use of GenAI, especially in settings with unequal access conditions (Bozkurt, 2024; Song, 2024; Velarde-Camaqui et al., 2025). This ethical lens must accompany any technological integration process.

The effective integration of GenAI in higher education requires strong and sustained teacher training. Without adequate support, its use is likely to be limited to basic functions, leaving its pedagogical potential untapped (Salinas-Navarro et al., 2024; Yim et al., 2024). Critical training programs focused on developing digital and ethical competencies are essential to ensure meaningful and sustainable adoption of these technologies (Santana and Díaz-Fernández, 2023; Squires et al., 2023). In response to this need, the present study examines a five-day professional development workshop focused on pedagogical planning, classroom implementation, and assessment using GenAI tools.

This study explores the perceptions of university professors from the Peruvian Amazon regarding the impact, challenges, and opportunities of GenAI after participating in a training workshop. Through the analysis of their video-recorded reflections, the research aims to understand how this technology is interpreted and envisioned from a context historically lagging in terms of educational innovation. Accordingly, this study is guided by the following research question: How do university professors in the Peruvian Amazon perceive and appropriate GenAI after participating in a targeted training workshop?

2 Method

2.1 Methodological approach

This study adopts a qualitative descriptive design supported by lexicometric analysis. The objective was to examine how faculty members articulate perceptions, priorities, and constraints regarding the integration of generative AI within a geographically and infrastructurally constrained Amazonian context. Rather than adopting a fully interpretive paradigm, the study integrates computational textual structuring with contextual qualitative interpretation. The use of open-ended questions and data-driven lexical clustering aligns with recommendations for studying emerging phenomena in underexplored territories (Graebner et al., 2023; Khlaif et al., 2023). This design is particularly appropriate for examining emerging phenomena in underexplored contexts, where structured quantitative measures may overlook contextual meaning-making. The integration of lexicometric analysis with qualitative interpretation enables the identification of stable discursive patterns while preserving the situated perspectives of participants.

2.2 Participants

Data were collected in November 2024 during a five-day professional development workshop on GenAI conducted in Iquitos, Loreto, Peru, for faculty members at a public university located in the Amazon region. Iquitos was purposively selected as a case study due to its status as a geographically isolated urban center with persistent digital and infrastructural constraints, representing broader challenges faced by higher education institutions in the Peruvian Amazon and similar under-resourced regions. The workshop addressed three core domains: pedagogical planning, classroom implementation, and assessment practices supported by GenAI tools. A total of 65 professors attended the workshop. Of these, 37 voluntarily agreed to participate in the interview component. Recordings were excluded if (a) informed consent for recording was not granted or (b) audio quality prevented reliable verbatim transcription. After these exclusions, the final corpus comprised 35 participants.

Participants represented multiple departments within the Faculty of Education (including language and literature, educational sciences, mathematics/informatics, and related specializations). Teaching experience ranged from early-career faculty with fewer than five years of service to senior professors with more than 30 years of university teaching. All participants were affiliated with the same public institution and operated within a shared regional context characterized by infrastructural constraints, intermittent broadband connectivity, and limited access to updated academic databases—conditions typical of higher education institutions in the Peruvian Amazon.

2.3 Interview protocol

To ensure analytic transparency and procedural consistency, a semistructured interview protocol was developed prior to data collection. The protocol was aligned with the study objectives and designed to explore participants' perceptions of AI integration following the workshop intervention, covering macro-level (system impact), meso-level (institutional benefits and professional culture), and micro-level (classroom practice) dimensions.

The interview protocol included seven standardized open-ended prompts: (a) “¿Qué impacto crees que tendrá la IA en la educación superior?” (What impact do you believe AI will have on higher education?); (b) “¿Qué beneficio crees que introducirá la IA en la educación peruana?” (What benefits do you believe AI will introduce into Peruvian education?); (c) “¿Cómo crees que la IA podría transformar tu forma de enseñar en el aula?” (How do you think AI could transform your way of teaching in the classroom?); (d) “¿Qué desafíos crees que enfrentarán al implementar la IA?” (What challenges do you think will arise when implementing AI?); (e) “¿Cómo planeas utilizar la IA para mejorar la experiencia de aprendizaje en tus estudiantes?” (How do you plan to use AI to improve your students' learning experience?); (f) “¿Qué le recomendaría a su colega que está considerando aprender sobre la IA para la enseñanza?” (What would you recommend to a colleague who is considering learning about AI for teaching?); and (g) “¿Qué información del taller te resultó más útil para tu práctica pedagógica?” (What information from the workshop was most useful for your pedagogical practice?).

The prompts were drafted by the research team based on the study aims and refined to ensure clarity and neutrality; they were then piloted informally with two faculty members not included in the final corpus to verify comprehension and timing.

All participants responded to the same set of prompts, ensuring comparability across cases and reducing interviewer-induced variability. The prompts were intentionally phrased in neutral terms to allow participants to articulate both opportunities and concerns regarding AI integration.

2.4 Data collection

Interviews were conducted following the completion of the AI workshop. Participants attended a designated office at the university, where a trained research assistant facilitated the session. Each participant responded orally to the standardized prompts while being audio-recorded using a digital recorder. The use of recorded responses enabled the capture of richer, more natural reflections compared to highly structured interviews, preserving participants' spontaneous discourse while reducing interviewer intervention (Raingruber, 2003). To ensure procedural consistency across sessions, the research assistant read each question verbatim and limited intervention to procedural clarification when necessary. No additional probing questions were introduced beyond the predefined protocol.

Interview duration ranged from approximately 8 to 15 min, depending on the depth of participants' responses. The same seven prompts were administered verbatim to all participants; differences in response length reflected individual elaboration rather than variation in questioning. This structured, face-to-face format ensured comparability across cases while allowing participants to elaborate freely on their perspectives. All recordings were transcribed verbatim in Spanish, preserving participants' original wording. Transcripts were anonymized using alphanumeric identification codes. No thematic categories or interpretive labels were assigned prior to analysis; transcripts were organized exclusively by participant ID and prompt sequence to maintain analytic neutrality.

The final corpus comprised 16,973 words after transcription and preprocessing. Individual participant contributions ranged from 216 to 1,409 words (M ≈ 485), indicating variability in response length across participants. To mitigate potential bias associated with unequal text length, IRaMuTeQ's automatic segmentation into Elementary Context Units (ECUs) was used, ensuring proportional representation of discourse across the corpus during analysis. Variability in response length reflects participant elaboration rather than procedural differences.
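IRaMuTeQ's actual ECU segmentation is punctuation- and length-aware; as a simplified illustration of the principle, the following sketch cuts transcripts on a fixed word budget only, showing how longer responses contribute proportionally more context units (the 40-word budget and sample length are illustrative assumptions, not the study's settings).

```python
# Simplified sketch of elementary-context-unit (ECU) segmentation.
# IRaMuTeQ's real algorithm also respects punctuation and sentence
# boundaries; this version cuts on a fixed word budget only.

def segment_into_ecus(text, max_words=40):
    """Split a transcript into consecutive segments of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# A hypothetical participant contributing ~485 words (the corpus mean)
# yields roughly a dozen ECUs, so longer responses are represented by
# proportionally more units rather than by a single weighted document.
sample = ("word " * 485).strip()
ecus = segment_into_ecus(sample)
print(len(ecus))
```

Because class formation operates on ECUs rather than whole transcripts, a participant's influence on the clustering scales with how much they said, which is what motivates the proportional-representation claim above.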

2.5 Data analysis

The study adopted a qualitative descriptive design supported by lexicometric analysis using IRaMuTeQ (version 0.8 alpha 7), a software based on the R environment for statistical textual analysis (Camargo and Justo, 2013). The analytical approach combined computational segmentation with contextual qualitative interpretation. IRaMuTeQ was selected because it enables transparent and reproducible lexicometric procedures grounded in statistical clustering (Reinert method), while preserving access to contextualized textual segments. This combination allowed the study to balance computational rigor with qualitative interpretation, ensuring traceability between lexical classes and participants' original discourse. Only interviewer prompts were removed prior to analysis. No lexical cleaning, normalization, or paraphrasing was applied. All participant discourse was preserved.

Textual data were processed through automatic segmentation into elementary context units (ECUs), followed by Descending Hierarchical Classification (Reinert method), word similarity analysis, and co-occurrence mapping. These procedures enabled identification of stable lexical classes and discursive structures across the corpus.
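In the Reinert method, the words characteristic of each lexical class are conventionally ranked by a chi-square test of association between word occurrence and class membership over ECUs. A minimal sketch of that 2×2 computation follows; the counts are hypothetical toy values, not the study's data, and IRaMuTeQ applies further filtering (lemmatization, frequency thresholds) not shown here.

```python
# Chi-square association between one word and one lexical class,
# computed over ECUs. Toy counts only; real values come from IRaMuTeQ.

def chi_square_2x2(a, b, c, d):
    """a: ECUs in the class containing the word; b: in the class without it;
    c: ECUs outside the class with the word; d: outside without it."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom

# Hypothetical example: a word appears in 20 of 30 ECUs of one class
# but in only 10 of 120 ECUs elsewhere -- a strong association.
score = chi_square_2x2(20, 10, 10, 110)
print(score)
```

Words with high scores for a class (relative to a significance threshold) become its characteristic vocabulary, which is what the class descriptions in Section 3 report.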

Rather than predefining thematic categories, analysis proceeded from the full corpus, allowing lexical classes to emerge statistically from co-occurrence patterns. Where relevant, subsets of responses corresponding to specific prompts were examined to deepen contextual interpretation, but class formation itself was not constrained by a priori thematic coding.

Analytical interpretation was iterative. Computational outputs (e.g., dendrograms, similarity graphs, and contextual segments) were cross-checked against original transcripts to ensure coherence between statistical patterns and participant discourse. This procedure integrated quantitative textual structuring with qualitative reading of meaning.

Although IRaMuTeQ supports Spanish-language processing, the corpus was translated into English prior to analysis to ensure consistency across analytical outputs and facilitate interpretation within an international research context. Because lexicometric analysis is sensitive to lexical variation, particular attention was paid to preserving semantic equivalence during translation. Transcripts were reviewed manually after translation to ensure that participant intent and contextual meaning were maintained.

While translation may introduce some degree of lexical smoothing, the analysis focused on co-occurrence patterns and structural relationships between terms rather than isolated word frequencies. This reduces the impact of minor linguistic variations on the identification of stable lexical classes. English translations were used exclusively for computational processing, while interpretation of results was continuously cross-checked against the original Spanish transcripts. This procedure allowed analytical consistency while preserving alignment with the original language data.

Figure 1 illustrates the analytical workflow adopted in this study. Following dataset processing, responses were organized according to the four research objectives derived from the interview protocol: (1) impact discourse, (2) implementation challenges, (3) strategies and recommendations, and (4) transformative training. Within each objective-specific corpus, IRaMuTeQ techniques were selected according to analytical purpose. For instance, lexical cloud and word similarity analyses were used to visualize dominant discursive structures related to perceived impact; Descending Hierarchical Classification (CHD) was applied to identify statistically stable lexical classes in the challenges and training domains; and co-occurrence graphs supported contextual examination of strategic and recommendation-oriented discourse.

Figure 1

Importantly, while the corpus was segmented according to research objectives for structural clarity, lexical class formation within each segment remained data-driven. Statistical clustering was generated through Reinert's method based on word co-occurrence patterns rather than through predefined thematic coding. This alignment between research objectives and analytical procedures ensured methodological coherence while preserving transparency in the analytic process.

2.6 Ethical considerations

This study was reviewed and approved by the Institutional Ethics Committee of Universidad César Vallejo (June 4, 2024). All participants signed informed consent forms approved by the ethics committee of Universidad César Vallejo. Anonymity was ensured through coded identifiers, and all participation was voluntary. The study complies with ethical standards for research in higher education settings.

3 Results

3.1 Perceptions of impact: AI as a catalyst for educational transformation

To better understand how university professors from the Peruvian Amazon perceive the impact of artificial intelligence on higher education, a lexical and relational analysis was conducted based on their discourse. These visualizations not only highlight recurring themes but also offer a window into how faculty interpret, appropriate, and re-signify AI within their pedagogical practices.

The lexical cloud (Figure 2) reveals a concentrated discourse around the figure of the student, whose prominence reflects an educational perspective that places learners at the center of technological transformation. The coexistence of terms such as teacher, impact, education, learn, and AI suggests a discursive shift in which artificial intelligence is no longer conceived as a peripheral tool, but rather as an element integrated into the academic ecosystem. As one professor described, “we are facing a new paradigm… AI will give us tools to focus more on higher-order thinking skills such as critical thinking and problem solving.” The repeated presence of words like work, research, improve, and knowledge indicates that these professors—notably from public universities in the Amazon region—associate AI with academic rigor and professional development rather than merely technological novelty. For instance, one faculty member noted that AI would “make learning more personalized and automate administrative tasks… offering more tailored solutions to students’ individual needs.”

Figure 2

A closer look at the similarity graph (Figure 3) enables a deeper semantic interpretation. Terms are organized into clusters that reflect distinct, yet interconnected, dimensions of the perceived impact of AI in higher education. The orange cluster (e.g., teacher, train, support) emphasizes the role of the professor as an active agent in guiding students' adaptation to AI tools. The green and light blue clusters (learn, transform, educational, personalize) highlight a pedagogical vision of AI as a driver of more adaptive, student-centered learning experiences. Meanwhile, the red and violet clusters (intelligence, research, scientific, apply) underscore the potential of AI to enrich academic inquiry, streamline information processing, and support more sophisticated forms of knowledge construction.

Figure 3

These groupings are not merely thematic but reveal implicit tensions: the coexistence of tool and impact signals a tension between instrumental and transformative views of AI. Similarly, the presence of positive, benefit, and support alongside challenge, update, and properly indicates that while expectations are high, there is also awareness of the institutional and ethical conditions required for effective integration. It is worth noting that no terms explicitly related to resistance or technophobia emerged, which contrasts with common findings in broader national samples. This may reflect a distinctive openness among Amazonian faculty, possibly influenced by the pedagogical urgency generated by geographic isolation and resource constraints. As another participant emphasized, AI would “allow knowledge to flow to places that previously had limited access.” These tensions raise an important question: how can teacher training initiatives be designed to address both the instrumental and transformative dimensions of GenAI integration in resource-constrained contexts?

Overall, the visualizations confirm that university professors from this underrepresented region articulate a nuanced discourse about AI—one that combines enthusiasm with critical awareness. Their perspectives integrate educational equity, professional adaptation, and pedagogical innovation, challenging the notion that peripheral regions lag behind in technological reflection.

3.2 Structural barriers: technical, pedagogical, and contextual challenges

To address the second objective, a Descending Hierarchical Classification (CHD) was performed on responses elicited by the prompt concerning implementation challenges. The results, illustrated in Figure 4, reveal five distinct lexical classes that reflect a complex landscape of perceived barriers and requirements.

Figure 4

Class 1 (19.5%) groups words such as technologic, intelligence, educational, ethical, and challenge, pointing to the urgent need for technological training and ethical literacy. It reflects critical concern about whether teachers are prepared to handle the complexity of AI adoption. As another participant noted, “relying entirely on AI for student learning requires a lot of caution and attention.”

Class 2 (17.1%) is associated with issues of poor digital practices, represented by terms like copy, paste, and relate. This cluster suggests that some students and teachers engage with AI without a proper understanding of academic integrity or structured methodology. As one professor explained, “we need to design assessments that truly measure learning, knowing that students are using AI.”

Class 3 (26.8%), the most dominant class, emphasizes learn, tool, student, and classroom, suggesting that AI is seen as a complementary pedagogical resource, especially when used directly in student-centered environments.

Class 4 (19.5%) reflects themes around information management and responsibility, with terms like respect, read, and source, highlighting the teacher's role in mediating digital content and guiding proper use.

Class 5 (17.1%) focuses on infrastructure and institutional readiness, with terms like access, university, data, and technology, underlining disparities in connectivity and availability of platforms in remote contexts like the Amazon. For example, one faculty member stated, “at the university, we don't have internet access in classrooms, and both teachers and students rely on our phones.”

In sum, Figure 4 shows that the perceived challenges are multidimensional, combining limitations in digital infrastructure with pedagogical, ethical, and cultural tensions. These findings reinforce the importance of contextualized AI training, especially in underserved regions where educational transformation necessitates not only access to technology but also its critical appropriation through sustainable and supportive infrastructures. Rather than indicating simple resistance, the discourse reveals a context in which technological aspiration coexists with structural fragility.

3.3 Strategic integration: uses and recommendations from faculty

To analyze the strategies and recommendations proposed by faculty members for integrating artificial intelligence into higher education, a co-occurrence network was generated based on responses to the prompts regarding use strategies and peer recommendations (Figure 5). This figure reveals the semantic relationships between key terms across these responses.
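The co-occurrence network itself is produced by IRaMuTeQ, but the underlying counting step can be illustrated with a stdlib-only sketch. The segments and tokens below are invented examples echoing terms reported in the results, not actual corpus data.

```python
from collections import Counter
from itertools import combinations

# Count within-segment word co-occurrences -- the raw edge weights
# behind a co-occurrence graph. Toy segments; real input would be
# the preprocessed ECUs of the strategies/recommendations subcorpus.
segments = [
    "recommend teacher training",
    "ai tool platform feedback",
    "recommend teacher workshop",
]

pairs = Counter()
for seg in segments:
    words = sorted(set(seg.split()))        # unique words per segment
    pairs.update(combinations(words, 2))    # all unordered word pairs

# Edge weight between two terms = number of segments in which they
# co-occur; frequently co-occurring terms are drawn close together.
print(pairs[("recommend", "teacher")])  # 2
```

The "proximity" readings in the paragraph above correspond to high edge weights in exactly this sense: terms that repeatedly appear in the same context units cluster together in the rendered graph.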

Figure 5

The proximity between recommend and teacher highlights the emphasis on guided implementation: participants consistently suggested promoting formal training and peer exchange rather than isolated experimentation. As one participant stated, “participating in AI workshops and learning communities is very helpful… it's necessary to stay open to experimenting with new tools.” In parallel, the connection between AI and tool or platform underlines the potential for creating personalized learning experiences and improving feedback and evaluation mechanisms. For instance, one faculty member described using AI to “automate formative assessments… to give more immediate and timely feedback.” This is supported by concordance data, where educators mention integrating platforms that allow for individual learning paths, interactive simulations, and skill development.

The presence of student and learn within dense clusters further reinforces the view of AI as a means to empower learners through autonomy and support, while teacher retains a non-negotiable role in guiding, interpreting, and critically assessing AI-generated outputs. As another professor emphasized, “students need your accompaniment… the teacher must lead the whole process.” These findings demonstrate that while faculty embrace AI, they do so with an awareness of the pedagogical, ethical, and contextual considerations necessary for sustainable integration.

The graph shows clusters of terms frequently associated with “AI,” revealing patterns of discourse around implementation strategies, pedagogical roles, and perceptions of technology-enhanced learning. Rather than advocating for automation, the discourse reflects a mediated integration model in which AI functions as a scaffold under sustained pedagogical supervision. Importantly, many of these strategies appear feasible even in low-resource contexts, as they rely on accessible tools and pedagogical adjustments rather than advanced technological infrastructure.

3.4 Transformative training: the role of workshops in teacher empowerment

To explore how training workshops contribute to teacher empowerment and pedagogical transformation, we conducted a CHD analysis of the taller subcorpus. The resulting dendrogram (Figure 6) reveals five distinct lexical classes, each reflecting a unique dimension of the workshop experience.

Figure 6

Class 1 (26.1%) emphasizes hands-on application, with frequent terms such as practical, improve, education, apply, update, and prompt. This suggests a strong appreciation for the workshop's pragmatic focus, where educators learned to directly implement AI tools like ChatGPT through structured activities that simulate real teaching scenarios.

Class 2 centers on sharing and giving, highlighting the social dynamics of the sessions. Expressions such as give, learn, share, and time reflect the collaborative learning environment, where peer interaction enabled mutual support and experience exchange, which was especially crucial for participants from underserved regions such as the Amazon.

Class 3 revolves around research, platform, and build, indicating that the training encouraged educators to go beyond mere tool use and explore the design of pedagogical strategies and digital environments for student learning. Notably, terms like technology, explore, and perplexity suggest growing interest in platforms that integrate reliable sources and critical thinking.

Class 4 focuses on the tool ChatGPT and its practical exploration, where words such as task, interest, and Hans (referring to the workshop leader) signal the relevance of guided practice in reducing resistance and clarifying use cases for the classroom.

Class 5, finally, reflects higher-order reflection, with terms like ai, understand, complement, and teacher, suggesting that participants recognize AI as a valuable complement to, rather than a replacement for, professional teaching. The presence of search and thing also signals exploratory attitudes that are still evolving.

Together, these results confirm that the workshops were not only informative but also transformative: participants moved from passive exposure to active exploration and conceptual integration of AI. The learning was both practical and reflective, empowering educators with tools and confidence to engage critically with emerging technologies. These classes can be interpreted as reflecting key learning outcomes of the workshop, including the development of practical AI skills, collaborative learning, pedagogical design capacities, guided tool use, and reflective integration of technology into teaching practice.


The lexical and structural analyses conducted across the four objectives revealed how faculty members from the Peruvian Amazon perceive, appropriate, and integrate artificial intelligence into higher education. The findings reflect a nuanced understanding of both potential and risk, shaped by regional challenges and contextual constraints. On one hand, participants emphasized the transformative and practical value of AI for improving teaching and learning. On the other, they voiced concerns regarding infrastructure gaps, teacher preparedness, and ethical considerations.

While the impact and utility of AI were acknowledged through terms such as student, teacher, tool, and recommend, participants also highlighted the importance of pedagogical accompaniment and continuous training. The workshop sessions, in particular, served as empowering spaces for exploration, demystification, and collective construction of knowledge. These results underscore the need for sustainable, context-aware strategies that not only introduce technological tools but also strengthen digital pedagogical competencies.

In sum, the results provide grounded insights into the lived experiences and aspirations of educators navigating the postdigital turn in regions often marginalized from technological innovation. These voices offer valuable contributions to the global conversation on AI in education and set the stage for the following discussion.

4 Discussion

GenAI is not displacing educators but repositioning them as key mediators in digital transformation contexts. In the similarity graph (Figure 3), terms like support, train, guide, and student show how faculty perceive themselves as active facilitators of change. This perspective aligns with the view that GenAI can enhance teachers’ use of technology for interaction, feedback, and personalization of learning (Akavova et al., 2023; Sajja et al., 2023; Tang, 2024). In the context of GenAI, teachers do not lose their relevance; rather, they redefine their role through a formative and ethically grounded perspective. At an institutional level, this repositioning requires structured support through professional development programs, clear guidelines for AI integration, and policies that promote ethical and pedagogically grounded use of these technologies.

The Amazonian experience challenges the narrative of digital lag in peripheral regions. In the lexical cloud (Figure 2), terms such as impact, improve, knowledge, and research suggest an optimistic academic stance toward GenAI. This reflects a commitment to technological equity in structurally limited contexts (Anaya Figueroa et al., 2021). From the periphery, these educators are not just adopting technology; they are reinterpreting it with a critical and transformative outlook. Unlike studies conducted in urban, high-connectivity environments, these findings emerge from a region characterized by unstable internet access, limited institutional infrastructure, and geographic isolation. The prominence of terms related to research, improvement, and equity suggests that GenAI is being interpreted not as a technological luxury but as compensatory infrastructure: a tool to mitigate structural educational gaps rather than simply enhance already robust systems.
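Lexical clouds such as the one in Figure 2 rest on simple frequency counts over the cleaned corpus: the more often a term occurs, the larger it is drawn. A minimal sketch of that tally follows; the transcript fragments and stopword list are invented for illustration, not drawn from the study's data.

```python
from collections import Counter
import re

# Toy fragments standing in for the real video transcriptions (invented).
transcripts = [
    "AI will impact research and improve student knowledge",
    "teachers can improve their research with AI tools",
    "knowledge and equity matter when we improve access",
]

# Minimal ad hoc stopword list; real pipelines use curated lists and lemmatization.
STOPWORDS = {"and", "can", "we", "when", "will", "with", "their", "the"}

def term_frequencies(docs):
    """Count content words across all documents (case-folded, stopwords removed)."""
    words = (w for d in docs for w in re.findall(r"[a-z]+", d.lower()))
    return Counter(w for w in words if w not in STOPWORDS)

freqs = term_frequencies(transcripts)
# The most frequent terms become the largest words in the cloud.
print(freqs.most_common(3))
```

Here "improve" dominates the toy corpus, just as impact, improve, knowledge, and research dominate the actual cloud.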

The risks of uncritical automation demand ethical reflection on the use of GenAI in higher education. Results reveal teachers' concerns about technological dependence, data quality, and the loss of autonomous thinking (see Figure 4). These tensions echo broader ethical debates (Velarde-Camaqui et al., 2025) on algorithmic bias and responsible data use (Bai et al., 2023; Shibani et al., 2024). Integrating GenAI into teaching requires strong ethical frameworks to guide its pedagogical application. In low-connectivity contexts, ethical concerns are compounded by infrastructural fragility: when access is scarce, dependence risks becoming asymmetrical, and technological mediation may concentrate rather than democratize knowledge production. In practical terms, this may involve the incorporation of ethical guidelines for AI use, the development of assessment rubrics that account for AI-assisted work, and the inclusion of ethics-focused modules within teacher training programs.

The integration of GenAI by educators extends beyond functional automation, evidencing alignment with purposeful pedagogical strategies. In the co-occurrence analysis (Figure 5), terms like use, student, class, prompt, recommendation, and learn suggest a critical integration. These practices reflect an instrumental understanding of the technology that reinforces teacher autonomy (Wang et al., 2024; Xiao et al., 2023). GenAI's potential is fully activated when teachers decide how, when, and for what purpose to use it.
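A co-occurrence analysis of this kind reduces, at base, to counting pairs of terms that appear in the same response; the strongest pairs become the most prominent edges of the graph. A minimal sketch follows, with invented responses and a term list echoing those reported above.

```python
from collections import Counter
from itertools import combinations
import re

# Toy responses (invented); terms of interest mirror those in the analysis.
responses = [
    "I use a prompt so each student can learn in class",
    "my recommendation: let the student use AI to learn",
    "in class I use a prompt before the student writes",
]
TERMS = {"use", "student", "class", "prompt", "recommendation", "learn"}

def cooccurrence(docs, vocab):
    """Count unordered pairs of vocabulary terms co-occurring in one document."""
    pairs = Counter()
    for doc in docs:
        present = sorted(vocab & set(re.findall(r"[a-z]+", doc.lower())))
        pairs.update(combinations(present, 2))
    return pairs

edges = cooccurrence(responses, TERMS)
# The heaviest pairs would be drawn as the thickest edges of the graph.
print(edges.most_common(3))
```

In this toy corpus the pair (student, use) co-occurs in every response, which is the kind of signal that links those two terms so strongly in the figure.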

Contextual analyses show that educators not only use GenAI but also reflect on its pedagogical meaning. Multiple statements emphasize that AI should guide—not replace—the educational process (see Figure 5 contexts). This view aligns with the proposal of human-centered AI that respects teacher agency (Akavova et al., 2023; Lee and Moore, 2024). Teaching with GenAI also means teaching about it, and doing so critically.

Teacher training in GenAI is also a strategy to reduce digital gaps in vulnerable contexts. In structurally marginalized territories, GenAI training becomes a form of epistemic empowerment. Rather than simply improving efficiency, it enables faculty to participate in global academic conversations from positions historically constrained by geographic and infrastructural barriers. Future research should also incorporate longitudinal designs to assess the sustained impact of training interventions on teaching practices and student outcomes over time.

This study sheds light on how university faculty from the Peruvian Amazon are navigating the promises and tensions of generative artificial intelligence in higher education. The discussion highlights four key contributions: a) the repositioning of teachers as pedagogical mediators rather than passive users of technology; b) the coexistence of enthusiasm and critical caution when integrating GenAI in under-resourced contexts; c) the proactive and situated appropriation of GenAI tools following targeted training interventions; and d) the strategic potential of these technologies to address long-standing equity gaps, provided they are embedded in meaningful pedagogical frameworks. These findings invite us to rethink innovation not as a top-down technological imposition, but as a culturally and contextually grounded process of educational transformation.

The integration of GenAI in teacher education requires a multifaceted and human-centered approach. Based on the evidence presented, four concluding insights can be drawn: a) GenAI fosters pedagogical innovation when aligned with teachers' reflective practice; b) ethical and infrastructural concerns must be addressed early to avoid superficial or harmful adoption; c) context-sensitive training empowers faculty in marginalized regions to engage critically with emerging technologies; and d) co-construction of GenAI use between students and teachers enhances agency, relevance, and sustainability. These insights reinforce the notion that inclusive technological adoption is inseparable from pedagogical ethics and institutional commitment.

This study is exploratory in nature and relies on a single qualitative data source from a specific geographic and institutional context, which may limit the generalizability of findings. The use of videos instead of live interviews, while appropriate for the context, constrained follow-up probing and may have influenced participants' depth of reflection. Future research should expand to multi-site designs, explore longitudinal impacts of GenAI training, and include triangulation with student perspectives. Additionally, it would be valuable to assess how institutional policies and curricular frameworks mediate the integration of GenAI in higher education settings.

Statements

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by Comite de Ética - Universidad Cesar Vallejo. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

DV-C: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. IP-S: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. HM-G: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft.

Funding

The author(s) declared that financial support was received for this work and/or its publication. The authors gratefully acknowledge the financial support provided by the research office of Cesar Vallejo University.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was used in the creation of this manuscript. GenAI was used to create certain figures.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Akavova, A., Temirkhanova, Z., and Lorsanova, Z. (2023). Adaptive learning and artificial intelligence in the educational space. E3S Web Conf. 451, 06011. doi: 10.1051/e3sconf/202345106011

2. Anaya Figueroa, T., Montalvo Castro, J., Calderón, A. I., and Arispe Alburqueque, C. (2021). Escuelas rurales en el Perú: factores que acentúan las brechas digitales en tiempos de pandemia (COVID-19) y recomendaciones para reducirlas. Educación 30 (58), 11–33. doi: 10.18800/educacion.202101.001

3. Bai, L., Liu, X., and Su, J. (2023). ChatGPT: the cognitive effects on learning and memory. Brain-X 1 (3), e30. doi: 10.1002/brx2.30

4. Bozkurt, A. (2024). GenAI et al.: cocreation, authorship, ownership, academic ethics and integrity in a time of generative AI. Open Praxis 16 (1), 1–10. doi: 10.55982/openpraxis.16.1.654

5. Camargo, B. V., and Justo, A. M. (2013). IRAMUTEQ: um software gratuito para análise de dados textuais. Temas Psicol. 21 (2), 513–518. doi: 10.9788/TP2013.2-16

6. Dai, C.-P., and Ke, F. (2022). Educational applications of artificial intelligence in simulation-based learning: a systematic mapping review. Comput. Educ.: Artif. Intell. 3, 100087. doi: 10.1016/j.caeai.2022.100087

7. Davis, R. (2024). Korean in-service teachers’ perceptions of implementing artificial intelligence (AI) education for teaching in schools and their AI teacher training programs. Int. J. Inf. Educ. Technol. 14 (2), 214–219. doi: 10.18178/ijiet.2024.14.2.2042

8. Graebner, M. E., Knott, A. M., Lieberman, M. B., and Mitchell, W. (2023). Empirical inquiry without hypotheses: a question-driven, phenomenon-based approach to strategic management research. Strateg. Manag. J. 44 (1), 3–10. doi: 10.1002/smj.3393

9. Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., et al. (2019). Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigit. Sci. Educ. 1 (2), 427–445. doi: 10.1007/s42438-019-00046-x

10. Hutson, J., Jeevanjee, T., Graaf, V. V., Lively, J., Weber, J., Weir, G., et al. (2022). Artificial intelligence and the disruption of higher education: strategies for integrations across disciplines. Creat. Educ. 13 (12), 3953–3980. doi: 10.4236/ce.2022.1312253

11. Khlaif, Z. N., Sanmugam, M., Joma, A. I., Odeh, A., and Barham, K. (2023). Factors influencing teacher’s technostress experienced in using emerging technology: a qualitative study. Technol. Knowl. Learn. 28 (2), 865–899. doi: 10.1007/s10758-022-09607-9

12. Lee, S. S., and Moore, R. L. (2024). Harnessing generative AI (GenAI) for automated feedback in higher education: a systematic review. Online Learn. 28 (3). doi: 10.24059/olj.v28i3.4593

13. McGrath, C., Cerratto Pargman, T., Juth, N., and Palmgren, P. J. (2023). University teachers’ perceptions of responsibility and artificial intelligence in higher education—an experimental philosophical study. Comput. Educ.: Artif. Intell. 4, 100139. doi: 10.1016/j.caeai.2023.100139

14. Peláez-Sánchez, I. C., Velarde-Camaqui, D., and Glasserman-Morales, L. D. (2024). The impact of large language models on higher education: exploring the connection between AI and education 4.0. Front. Educ. 9, 1392091. doi: 10.3389/feduc.2024.1392091

15. Raingruber, B. (2003). Video-cued narrative reflection: a research approach for articulating tacit, relational, and embodied understandings. Qual. Health Res. 13 (8), 1155–1169. doi: 10.1177/1049732303253664

16. Sajja, R., Sermet, Y., Cikmaz, M., Cwiertny, D., and Demir, I. (2023). Artificial intelligence-enabled intelligent assistant for personalized and adaptive learning in higher education (Version 1). arXiv. doi: 10.48550/ARXIV.2309.10892

17. Salinas-Navarro, D. E., Vilalta-Perdomo, E., Michel-Villarreal, R., and Montesinos, L. (2024). Using generative artificial intelligence tools to explain and enhance experiential learning for authentic assessment. Educ. Sci. 14 (1), 83. doi: 10.3390/educsci14010083

18. Santana, M., and Díaz-Fernández, M. (2023). Competencies for the artificial intelligence age: visualisation of the state of the art and future perspectives. Rev. Manag. Sci. 17 (6), 1971–2004. doi: 10.1007/s11846-022-00613-w

19. Shibani, A., Knight, S., Kitto, K., Karunanayake, A., and Buckingham Shum, S. (2024). “Untangling critical interaction with AI in students’ written assessment,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–6. doi: 10.1145/3613905.3651083

20. Song, N. (2024). Higher education crisis: academic misconduct with generative AI. J. Contingencies Crisis Manag. 32 (1), e12532. doi: 10.1111/1468-5973.12532

21. Squires, E., Bacchi, S., and Maddison, J. (2023). We need to chat about artificial intelligence. Med. J. Aust. 219 (8), 394. doi: 10.5694/mja2.52081

22. Stafie, C. S., Sufaru, I.-G., Ghiciuc, C. M., Stafie, I.-I., Sufaru, E.-C., Solomon, S. M., et al. (2023). Exploring the intersection of artificial intelligence and clinical healthcare: a multidisciplinary review. Diagnostics 13 (12), 1995. doi: 10.3390/diagnostics13121995

23. Tang, K. H. D. (2024). Implications of artificial intelligence for teaching and learning. Acta Pedagog. Asiana 3 (2), 65–79. doi: 10.53623/apga.v3i2.404

24. Velarde-Camaqui, D., Denegri-Velarde, M. I., Velarde-Camaqui, K., and Solis-Trujillo, B. P. (2025). “Academic ethics in the age of artificial intelligence: a systematic mapping of the literature,” in Communication and Applied Technologies, eds. D. B. Ibáñez, E. Gallardo-Echenique, H. Siringoringo, and N. L. Diez (Singapore: Springer Nature), 349–358. doi: 10.1007/978-981-96-0426-5_30

25. Wang, S., Xu, T., Li, H., Zhang, C., Liang, J., Tang, J., et al. (2024). Large language models for education: a survey and outlook (Version 2). arXiv. doi: 10.48550/ARXIV.2403.18105

26. Williamson, S. M., and Prybutok, V. (2024). Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl. Sci. 14 (2), 675. doi: 10.3390/app14020675

27. Xiao, C., Xu, S. X., Zhang, K., Wang, Y., and Xia, L. (2023). “Evaluating reading comprehension exercises generated by LLMs: a showcase of ChatGPT in education applications,” in Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), 610–625. doi: 10.18653/v1/2023.bea-1.52

28. Yim, D., Khuntia, J., Parameswaran, V., and Meyers, A. (2024). Preliminary evidence of the use of generative AI in health care clinical services: systematic narrative review. JMIR Med. Inform. 12 (1), e52073. doi: 10.2196/52073

29. Zhou, N., Zhang, Z., Nair, V. N., Singhal, H., and Chen, J. (2022). Bias, fairness and accountability with artificial intelligence and machine learning algorithms. Int. Stat. Rev. 90 (3), 468–480. doi: 10.1111/insr.12492

Keywords

Amazon region, artificial intelligence, educational innovation, generative AI, higher education, teacher, training workshop, AI literacy

Citation

Velarde-Camaqui D, Peláez-Sánchez IC and Mejía-Guerrero H (2026) Generative AI in the Peruvian Amazon: a qualitative study on university teachers’ perceptions after a training workshop. Front. Educ. 11:1730207. doi: 10.3389/feduc.2026.1730207

Received

22 October 2025

Revised

30 March 2026

Accepted

02 April 2026

Published

08 May 2026


Edited by

May Portuguez-Castro, Centrum Catolica Graduate Business School, Peru

Reviewed by

Michele Jackson, Michigan State University, United States

Anna Vitalievna Sokolova Grinovievkaya, Autonomous Metropolitan University Xochimilco Campus, Mexico

*Correspondence: Iris Cristina Peláez-Sánchez

