
ORIGINAL RESEARCH article

Front. Educ., 10 September 2025

Sec. Digital Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1597249

GenAI as a cognitive mediator: a critical-constructivist inquiry into computational thinking in pre-university education

  • 1Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, San Luis Potosí, Mexico
  • 2Facultad de Ciencias, Universidad Autónoma de San Luis Potosí, San Luis Potosí, Mexico
  • 3Tecnologico de Monterrey, Writing Lab, TecLab, Vicerrectoría de Investigación y Transferencia de Tecnología, Monterrey, Mexico
  • 4Escuela de Humanidades y Educación, Tecnologico de Monterrey, Monterrey, Mexico

This qualitative study investigates how high school students engage with generative artificial intelligence (GenAI), particularly ChatGPT, and how this interaction influences computational thinking and knowledge construction. Guided by a critical-constructivist framework, the research employed an action-research methodology structured around design thinking, prompt engineering, Python programming, and elevator pitch activities. Content analysis was conducted to trace the evolution of students’ cognitive strategies, with particular attention to how they formulated effective prompts, assessed and refined AI-generated responses, and iterated on their problem-solving approaches. Results show that while GenAI fosters creativity and computational fluency, it simultaneously raises challenges related to epistemic vigilance, critical reflection, and project feasibility. The study concludes that pedagogically grounded integration of GenAI requires balancing automation with the cultivation of analytical, ethical, and metacognitive capacities. These findings contribute to ongoing debates on AI-enhanced education by emphasizing the importance of structured and reflexive pedagogies to foster critical digital literacy and epistemic agency at the secondary level.

1 Introduction

Generative Artificial Intelligence (AI) is reshaping education by enabling personalized learning experiences and fostering essential 21st-century competencies such as problem-solving, creativity, and adaptive reasoning (Celik et al., 2024). Tools like ChatGPT and other large language models have shown considerable promise in supporting knowledge construction, enhancing self-regulated learning, and promoting computational thinking (Kim and Adlof, 2024; Yilmaz and Yilmaz, 2023). However, their integration into education also presents challenges—particularly regarding pedagogical implications, evolving human-AI interactions, and the development of higher-order thinking skills (Dimitriadou and Lanitis, 2023; Giannakos et al., 2024). Educators continue to seek strategies for using AI to cultivate critical and reflective thinking, beyond its use as a tool for automation and efficiency.

This study proposes a conceptual shift: from viewing AI merely as a tool to understanding it as a “technological subject”—a pedagogical actor capable of influencing cognitive processes and educational relationships (Balacheff, 1994; Luckin et al., 2016). This view demands a deeper theoretical understanding of how generative AI shapes meaning-making in learning environments.

Computational thinking—defined as the ability to approach problems systematically using core principles from computer science—is increasingly seen as a foundational skill in education (Hu, 2011; Liao et al., 2024). While some studies have examined how AI supports this skill, fewer have explored how structured, pedagogically mediated interactions with generative AI contribute to meaningful learning and the development of epistemic agency (Xia et al., 2025). AI-generated content often contains bias or inaccuracy, raising questions about how students validate information, refine their reasoning, and build epistemic vigilance (Bozkurt et al., 2024; Yan et al., 2024). The literature tends to focus either on technical capabilities or ethical risks, with limited empirical work on how students develop critical digital literacy through iterative, reflective engagement with generative models.

In this study, generative AI is understood as a “subject-technological actor” whose affordances and constraints are co-constructed through the pedagogical contract between students, teachers, and algorithms. From this lens, AI is not only a cognitive scaffold but also a catalyst for critical digital literacy.

The research was conducted at PrepaTec, the high school program of Tecnologico de Monterrey, a leading private university system in Latin America recognized for its educational innovation. This case is notable for its early adoption of AI-based tools and its systemic efforts to integrate future-ready skills into secondary education through active learning.

The core research problem guiding this study is that, although students increasingly engage with AI tools like ChatGPT, their interactions are often unstructured, superficial, or primarily instrumental, limiting the development of deeper cognitive and metacognitive skills. Few empirical studies examine how pedagogically structured AI engagements support both computational thinking and critical digital literacy. The study addresses the following question: How can structured student-AI interactions in an educational setting foster active learning, critical engagement, and the development of computational thinking?

1.1 General objective

To explore how structured interactions with generative AI, within a critical-constructivist pedagogical framework, enhance students’ computational thinking and critical digital literacy.

1.2 Specific objectives

1. Design and implement an educational experience integrating generative AI, design thinking, and programming to promote active, reflective, and collaborative learning.

2. Analyze how students formulate prompts, interact with AI outputs, and engage in iterative problem-solving to develop epistemic vigilance and computational skills.

To this end, the study applies a Critical-Constructivist Analysis that combines constructivism—emphasizing knowledge construction through interaction (Von Glasersfeld, 1989; Kumar, 2006)—and critical realism, which highlights technology’s non-neutral, structuring role in education (Bhaskar, 1979; Fleetwood, 2014). This approach also draws on critical pedagogy (McLaren, 2020; Giroux, 2010; Apple and Apple, 2004), advocating for education that empowers learners to interrogate and transform technological and social systems.

The study employs a qualitative action-research methodology (Baskerville and Pries-Heje, 1999), incorporating active learning strategies such as design thinking and Python programming to examine how students formulate effective prompts, assess AI-generated content, and apply computational thinking to real-world challenges.

The article proceeds as follows: Section 2 presents the theoretical framework on computational thinking and AI in education. Section 3 details the methodology and instructional design. Section 4 presents the intervention design, followed by results in Section 5. Section 6 discusses the findings using the Critical-Constructivist lens. Section 7 concludes with key insights, study limitations, and recommendations for future research.

2 Theoretical framework

2.1 Approach to the problem of study

The educational system and society maintain an interdependent relationship, where social changes influence education while education, in turn, shapes society (Billett, 2006). The rapid advancement of artificial intelligence (AI) technologies has intensified this dynamic, necessitating a critical examination of their impact on teaching and learning processes. Generative AI, particularly ChatGPT, has demonstrated significant potential in fostering adaptive learning, enhancing computational thinking, and promoting student knowledge construction (Giannakos et al., 2024; Tang et al., 2024). However, the integration of AI into formal education requires a rigorous evaluation of its pedagogical implications (Alier et al., 2024).

As technological advancements frequently outpace curriculum adaptability (Bush and Mott, 2009), educators often bridge these gaps through hidden curricula (Apple and Apple, 2004), incorporating new pedagogical strategies that are not explicitly included in official programs. Generative AI presents both opportunities and challenges, such as biases in AI-generated content, the validation of information, and the potential over-reliance on automated responses (Yan et al., 2024). Therefore, AI integration in education must be critically assessed, considering its effects on cognitive development, problem-solving strategies, and digital literacy.

This study examines how ChatGPT mediates the teaching of computational thinking in high school students. By conceptualizing the classroom as a structured yet evolving learning system, it identifies patterns in student-AI interaction and evaluates the role of instructional design in optimizing AI-based learning experiences. A key focus is analyzing student-AI interactions within an educational framework that fosters meaningful learning, exploring how generative AI enhances cognitive and metacognitive skills. Specifically, this study examines students’ ability to refine prompts, critically assess AI-generated content, and apply design thinking methodologies to problem-solving. In this context, generative AI is also conceptualized as a semiotic artifact that mediates learning through language, code, and interaction patterns, becoming part of the communicative ecology of the classroom.

2.2 Theoretical conceptualization

This study is grounded in Critical-Constructivist Analysis, which integrates constructivism as an epistemological framework (Kumar, 2006) and critical realism as an ontological perspective (Bhaskar, 1979; Fleetwood, 2014). This dual theoretical stance is further enriched by a semiotic view of AI, which allows us to understand generative AI as a boundary-crossing artifact that introduces new forms of meaning-making into the educational system.

2.2.1 Constructivism and the role of AI in learning

From a constructivist perspective, knowledge is not passively transmitted but actively constructed through environmental interaction (Von Glasersfeld, 1989). Learning occurs through exploration, reflection, and engagement in meaningful tasks (Hennessy, 1993). In this study, generative AI is framed as an interactive tool that facilitates active learning, enabling students to test hypotheses, refine problem-solving strategies, and develop computational thinking skills (Kim and Adlof, 2024; Yilmaz and Yilmaz, 2023).

ChatGPT functions as a personalized learning assistant, allowing students to experiment with different prompts and receive immediate feedback, reinforcing self-regulated learning. Constructivism aligns with competency-based learning (Voorhees and Bedard-Voorhees, 2016), promoting the development of transferable skills in real-world contexts. Generative AI supports this approach by providing dynamic learning environments where students refine computational, creative, and analytical skills. Moreover, generative AI plays a mediating role like that of other symbolic tools, such as diagrams or code, and thus contributes to meaning construction as a semiotic mediator within a culturally organized space (Wertsch, 1991).

2.2.2 Critical realism and the structuring role of AI

From a critical realism perspective, generative AI is not merely a passive tool but an active mediator that reshapes educational interactions. This framework (Fleetwood, 2014) posits that reality exists independently of perception, yet social structures influence how knowledge is constructed. Generative AI redefines traditional teacher-student dynamics, requiring pedagogical adaptations to ensure meaningful and equitable learning experiences (Bozkurt et al., 2024; Dimitriadou and Lanitis, 2023).

The introduction of AI into the classroom alters epistemic authority, necessitating that students critically navigate AI-generated knowledge, distinguishing between accurate insights and algorithmic biases (Wu, 2024). Algorithmic bias and AI reliability are central concerns (Giannakos et al., 2024; Yan et al., 2024), as AI-generated responses reflect training data, which may contain cultural, linguistic, or systemic biases. Thus, AI literacy must be integral to computational thinking education, ensuring that students develop the skills necessary to assess, critique, and contextualize AI outputs (Tang et al., 2024). AI integration should transcend instrumental use, incorporating a critical approach that enables students to question, analyze, and ethically engage with AI-generated content. Critical realism provides the ontological ground for recognizing AI not as a neutral infrastructure but as a socio-technical actor with structuring effects on agency, knowledge, and power in the classroom.

2.2.3 Articulating constructivism and critical realism in education

Critical-Constructivist Analysis integrates these perspectives by recognizing that:

1. Students actively construct knowledge through engagement with AI tools (constructivism).

2. AI, as a technological structure, actively shapes learning interactions and educational practices (critical realism).

This dual approach enables an understanding of AI as both a scaffold for cognitive development and a force that reconfigures traditional pedagogical relationships (Joyce and Calhoun, 2024). By adopting Critical-Constructivist Analysis, this study examines how students interact with AI and how these interactions are mediated by the broader socio-technical structures in which learning occurs (Freire, 1970; Apple, 1980). In this integrated view, generative AI can be conceptualized as a fourth actor within the didactic contract—alongside the teacher, the student, and the knowledge—thus expanding the classical triangular model (Brousseau, 2002) into a tetrahedral system (Teacher–Student–Knowledge–Artificial Intelligence, hereafter T-S-K-AI) that demands new pedagogical, ethical, and epistemological considerations.

2.3 Integrating generative AI in the classroom: the P → T model

As students interact with AI, they transition from an initial stage of proto-scientific understanding (P)—characterized by intuitive but unstructured ideas about AI—towards a theoretically grounded comprehension (T). This P → T transformation involves structured learning experiences that refine students’ perceptions of AI affordances and limitations. The model aligns with digital literacy research, which emphasizes explicit instruction in AI functionalities and constraints to foster informed and critical users (Bozkurt et al., 2024; Yan et al., 2024).

In this study, students evolved from viewing ChatGPT as a content generator to engaging in metacognitive strategies that optimized their interactions, reflecting a deeper conceptual understanding. This aligns with design-based research principles, emphasizing iterative learning through experimentation and reflection (Giannakos et al., 2024). This progression also reflects a semiotic transformation, as students begin to internalize the communicative logics of AI and reconfigure their own representations through dialogic engagement with the system (Benenson and Bryan, 2024).

The pedagogical and semiotic dimensions discussed here are further operationalized in this study through a qualitative action research approach, as detailed in Section 3.

3 Methodology

This study adopts an integrative methodological approach, combining constructivist epistemology, action research, and content analysis to examine how generative artificial intelligence—specifically ChatGPT—facilitates meaningful learning among high school students. The methodological strategy is inspired by the work of Baskerville and Pries-Heje (1999), who propose integrating qualitative methods into action research. Within this framework, content analysis is used as a data coding and structuring technique, allowing for the systematic identification of emerging patterns and meaningful categories that enrich the understanding of student-AI interactions and their impact on learning. Our approach emphasizes a semiotic sensitivity to how generative AI mediates not only cognitive tasks but also communicative patterns, reshaping learning dynamics within digital environments.

The methodological framework is based on three fundamental pillars. First, critical realism is adopted as an ontological perspective (Cruz-Ramírez et al., 2022) to interpret how AI transforms students’ perception and understanding of the world. Second, the framework rests on the constructivist paradigm, which conceives meaningful learning (Díaz de León-López et al., 2021) as the unifying axis of the educational experience. In this context, generative AI is conceived not solely as a tool, but as a technological mediator that enhances knowledge construction in adaptive and interactive environments. Finally, action research (García-Martínez et al., 2022) guides the design and development of educational interventions through iterative cycles of planning, implementation, observation, reflection, and adjustment, enabling continuous improvement of teaching practice.

From this perspective, generative AI is embedded as a transformative element within the didactic contract, acting as an interlocutor with relative agency. Through the intentional design of activities, students construct knowledge reflectively, supported by simulations, interactive scenarios, and continuous feedback provided by the ChatGPT model. The competency-based education approach complements this framework by fostering the development of skills such as computational thinking, problem-solving, and effective communication with AI systems. The educational intervention was implemented with a group of 24 students from the multicultural high school program at Tecnologico de Monterrey, an institution distinguished by an educational model focused on the development of transversal competencies (Díaz de León-López et al., 2021).

The methodological design is articulated through an action research process structured in three phases: initial exploration, educational intervention, and evaluation-refinement (Pendergast et al., 2024). In the first phase, interactions between students and ChatGPT were systematically observed to identify emerging patterns in prompt formulation, problem-solving strategies, and initial perceptions about the use of generative AI. This initial observation established a baseline of students’ prior knowledge, anticipating pedagogical challenges and opportunities. Based on preliminary findings, the second phase implemented a structured learning sequence based on design thinking principles. Students participated in activities that included empathetic analyses of real-life problems related to AI’s ethical and functional use, ideation sessions aimed at generating innovative solutions, and prototype development by creating interactive games in Python. ChatGPT played a central role as a tool for exploration, iteration, and refinement of the computational solutions developed by the students.

The third phase of the study consisted of evaluating the learning outcomes, combining direct classroom observations with analysis of student-generated products. These products were evaluated based on their creativity, clarity, and application of computational thinking skills, particularly in their interactive game projects and elevator pitch presentations. Through content analysis, key patterns in student-AI interaction were identified, providing empirical evidence of the mediating role of generative AI in active learning processes. Given the iterative nature of action research, the findings from this phase provided feedback into the pedagogical design, allowing for adjustments and improvements based on concrete experience.

The qualitative analysis was conducted using a systematic open, axial, and selective coding process, adapted from the approaches proposed by Mora-Ochomogo et al. (2021) and Chavira-Quintero and Olais-Govea (2023). In the first stage, emerging patterns were identified in formulating prompts, problem-solving strategies, and perceptions about ChatGPT. Subsequently, relationships between categories were established to observe the evolution of how students refined their interactions with AI. Finally, the findings were integrated into an explanatory model that articulates how AI can enhance the development of computational thinking and active learning. This analysis process was aligned with the theoretical constructs guiding the study, including computational thinking (problem structuring, algorithmic reasoning), active learning (iterative participation, creative problem-solving), and technological mediation (adaptive learning, contextualized knowledge construction).
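As a purely illustrative aid, and not part of the original analysis, the sketch below shows how open-coding labels assigned to student-AI interaction excerpts could be tallied to surface recurring patterns; the category labels and counts are hypothetical, since the actual codebook emerged inductively from the data.

from collections import Counter

# Hypothetical open-coding labels assigned to excerpts of student-AI dialogue.
# These labels are illustrative only; they do not reproduce the study's codebook.
coded_excerpts = [
    "vague_prompt", "missing_context", "accepts_output_uncritically",
    "vague_prompt", "adds_context_after_feedback", "verifies_with_external_source",
    "decomposes_task", "missing_context", "verifies_with_external_source",
]

# Tally code frequencies to support the identification of emerging patterns.
frequencies = Counter(coded_excerpts)
for code, count in frequencies.most_common():
    print(f"{code}: {count}")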

Regarding the implementation context, the intervention took place during the January–May 2025 semester at the San Luis Potosí campus of Tecnologico de Monterrey, in the elective course “Engineering Project,” lasting 14 weeks and with a workload of 3 h per week. The course, taught by a professor from the School of Engineering and Sciences, has a flexible approach to vocational exploration in engineering fields. The participation included the entire group, with no individual selection criteria, allowing for a homogeneous intervention within the school context. The students come from a high socioeconomic background, with consistent access to the internet and modern hardware, which minimized potential digital divides and ensured suitable technical conditions for the intensive use of artificial intelligence tools. This characteristic should be considered when interpreting the generalizability of the findings, as it may not reflect the conditions of other educational settings with limited technological resources or connectivity.

Activities focused on integrating generative artificial intelligence into the training process through designing interactive games, formulating effective prompts, and developing short speeches to strengthen communication and argumentative skills. The group of students, composed of 13 women and 11 men between the ages of 17 and 19, is part of a multicultural program that promotes internationalized experiences and the acquisition of a third language. All participants had constant access to the internet and personal technological devices, facilitating the implementation of advanced digital tools without technical restrictions. Since the study did not include interviews, surveys, or experimental manipulations, informed consent or institutional ethics committee approval was not required.

Figure 1 provides a visual representation of the research design, illustrating the interconnected phases and their iterative cycles of reflection and refinement. This diagram represents the three interrelated phases of the study: Initial Exploration, Educational Intervention, and Evaluation and Refinement. The process follows an iterative cycle of continuous reflection, integrating systematic observation, instructional design, and analysis of student outcomes. This methodological approach combines reflection and practical implementation, ensuring that learning activities align with educational objectives and evolving classroom dynamics. By integrating advanced technological tools with innovative pedagogical strategies, this study not only evaluates AI’s impact on skill development but also offers actionable recommendations for its implementation in future educational contexts.

Figure 1
Flowchart illustrating the implementation and research phases of a study. The process involves three phases: Initial Exploration, Educational Intervention, and Evaluation and Refinement. The chart highlights steps like systematic observation, pattern identification, perception of generative AI, and assessment of learner skills. It includes activities such as interactive game development, brainstorming, and classroom observation. An iterative cycle of continuous reflection is emphasized. Color-coded diagram references denote each phase.

Figure 1. This figure outlines the three key phases of the research design: (1) the Exploration Phase, where the initial context and pedagogical possibilities of generative AI were examined; (2) the Educational Intervention Phase, which involved instructional design, prompt engineering, and student engagement through computational tasks; and (3) the Evaluation and Refinement Phase, where student outcomes were assessed and the teaching strategies were iteratively improved.

4 Didactic intervention design

4.1 Phase 1: initial exploration

In this first phase, students engaged with ChatGPT to solve basic computational thinking problems. Observations and interaction logs revealed recurring challenges, particularly in formulating clear and detailed prompts. In most cases, students posed vague questions, which resulted in imprecise or incomplete AI responses.

An open coding process was conducted to structure the analysis, classifying student-AI interactions according to key areas for improvement. Many students did not provide sufficient context in their prompts, highlighting the need for explicit instruction in effective prompt formulation.

Three exploratory teaching interventions were carried out over three semesters, each offering valuable insights into how ChatGPT shaped different learning dynamics. These interventions—focused on physics problem-solving, audiovisual content creation, and drone programming—contributed critical reflections that informed the design of the current instructional sequence.

4.2 Foundations from previous interventions

As part of the coding process, three prior experiences designed and implemented by the same teacher-researcher with high school students at PrepaTec were retrieved and analyzed (see Appendix B). These interventions were essential in identifying pedagogical challenges and affordances associated with AI use in diverse instructional settings.

The cumulative reflection on classroom practices and the growing presence of generative AI in education and professional contexts served as the main drivers for the innovation reported in this study. Each prior experience had distinct learning objectives: (i) to strengthen problem-solving skills in physics, (ii) to foster creativity and digital literacy through video creation, and (iii) to introduce programming and design thinking through drone-based challenges.

In each case, ChatGPT was integrated with a specific pedagogical role tailored to the activity’s goals and constraints, as shown in Table 1.

Table 1

Table 1. This table compiles prior educational interventions involving ChatGPT, detailing the context of each experience, its specific objectives, the role played by the AI tool, and the observed dynamics or outcomes in the classroom.

Each intervention was evaluated according to its respective learning objectives:

• In the physics activity, students used ChatGPT to clarify concepts and reflect on their learning process (see Appendix B.1).

• The audiovisual experience employed design thinking to combine storytelling and ethical considerations in using AI tools (see Appendix B.2).

• The drone programming sequence integrated AI with design thinking and collaborative coding tasks (see Appendix B.3).

This systematization informed the initial coding scheme and analytical dimensions, helping to define the instructional and methodological priorities for the central sequence. Overall, the analysis revealed progress in creativity and performance, as well as persistent difficulties in critical engagement and communication with AI.

From open coding, emerging categories related to learning outcomes were identified. Table 2 summarizes the impact of each experience in five dimensions:

Table 2

Table 2. This table summarizes the learning outcomes derived from previous interventions, categorized into five key areas: understanding of the topic, creativity promotion, technological skills development, communication with AI, and knowledge of different AI tools.

Table 3 also analyzes student performance and the cognitive skills developed:

Table 3

Table 3. This table presents an analysis of student progress in four interconnected categories: performance with AI, communication with AI, creativity with AI, and development of critical thinking.

The results highlighted difficulties in prompt formulation and a tendency to passively accept AI-generated content. These findings motivated the development of a structured intervention based on design thinking and Python programming.

4.3 Phase 2: educational intervention

Based on these insights, a teaching sequence was designed to strengthen students’ critical and creative thinking skills through the structured use of generative AI and design thinking methodology.

4.3.1 Objectives of the didactic sequence

1. Develop critical thinking and problem-solving skills through effective prompt formulation.

2. Promote creativity and innovation through design thinking in the development of an educational game.

3. Strengthen basic programming competencies, including code debugging and communication with AI.

4.3.2 Stages of the didactic sequence

Figure 2 illustrates the three stages of the sequence:

1. Solution design through design thinking.

2. Prompt training and Python programming.

3. Project evaluation through testing and elevator pitch presentations.

Figure 2
Flowchart illustrating a nine-step process over three stages. Stage 1 includes team building, design thinking, action plan definition, and prototype design, lasting three weeks. Stage 2 covers ChatGPT training, IDE Python training, and business canvas model over four weeks. Stage 3 involves game testing, elevator pitch, and feedback in two weeks. Each stage is represented in separate colored boxes with icons.

Figure 2. This figure illustrates the three main stages of the instructional sequence. In Stage 1, students formed teams, applied Design Thinking methodology, created an action plan, and designed a prototype. Stage 2 focused on training students in the use of ChatGPT, working with a Python IDE, and developing a Business Model Canvas. Finally, Stage 3 involved game testing, the creation of an elevator pitch, and receiving peer and instructor feedback for refinement.

4.3.3 Stage A: design thinking for problem identification

Objective: Use design thinking methodology to develop empathy, creativity, and problem definition skills.

Process:

1. Empathy: Identify user problems using Ishikawa diagrams.

2. Define: Frame problems using the “How might we…?” method.

3. Ideate and Prototype: Develop preliminary ideas through prioritization tools and visual sketching.

4. Test: Refine ideas based on expert feedback.

4.3.4 Stage B: prompt formulation and programming

Objective: Train students in effective AI communication and introductory programming.

Activities:

1. Introduction to ChatGPT and prompt testing.

2. Python coding using an IDE.

3. Debugging and refinement with AI support.

Table 4 summarizes the key features of effective communication with ChatGPT.

Table 4

Table 4. Essential features for effective AI communication.

4.3.5 Stage C: testing and elevator pitch

In this stage, students tested their games and delivered concise pitches to external evaluators. These presentations assessed the originality, feasibility, and clarity of their solutions.

The goal was to refine students’ interaction with AI and their ability to present and defend their projects. This stage reinforced their capacity to synthesize, communicate persuasively, and critically reflect on their design process.

4.4 Phase 3: evaluation and refinement

The final phase analyzed the learning outcomes and adjusted the methodology based on concrete evidence. Evaluation included:

1. Direct observation of student-AI interaction.

2. Analysis of student-generated products (games and pitches).

3. Expert feedback from technology and entrepreneurship professionals.

4.5 Final considerations

This didactic sequence offers a practical framework for integrating generative AI into high school classrooms. It promotes critical thinking, computational literacy, and metacognitive skills through an iterative and adaptive process grounded in critical-constructivist pedagogy.

The experience demonstrates how generative AI can serve as a cognitive partner when embedded in well-structured learning environments. The continuous observation and analysis facilitated by action research ensured pedagogical alignment and relevance.

For readers interested in the pedagogical rationale and iterative development process behind this instructional design, detailed descriptions of the three foundational teaching interventions are included in Appendix B. These accounts support the reflective and empirical basis for the teaching sequence reported in this study.

5 Results

This section presents the findings obtained after implementing the didactic intervention, structured according to the previously described phases. The analysis focuses on students’ performance in prompt formulation, the evolution of their interaction with ChatGPT, and the quality of their final products. Additionally, emerging categories identified through content analysis are highlighted, reflecting the steps followed in this analytical technique.

5.1 Empathize phase

The intervention was structured in a 14-session teaching sequence, based on a Design Thinking approach and the intentional use of generative AI as a supporting resource in different project phases. During the empathize phase, students created an Ishikawa diagram (cause-effect diagram) to identify the factors contributing to cognitive decline with aging. Figure 3 shows an example of this diagram. To construct the diagram, students drew from three information sources: generative AI tools, interviews with older adults, and online research.

Figure 3
Cause-effect diagram illustrating factors leading to cognitive decline with age. Causes include lack of interaction, emotional stimulation, physical movement, and reading, as well as neuronal degeneration, and diseases like Alzheimer's. Effects noted are loss of social skills, increased stress, decreased motor coordination, and deficiencies in vitamins like B12, Omega-3, and D. These contribute to symptoms such as memory loss, reduced intellectual stimulation, and cognitive issues.

Figure 3. This diagram represents a cause-and-effect analysis created by students during the empathize phase of the Design Thinking process. The central effect identified was the loss of cognitive abilities in older adults. Contributing causes included lack of social interaction, neuronal degeneration, illnesses, physical inactivity, absence of reading habits, and vitamin deficiencies.

The analysis revealed that the main contributing factors included mental illnesses, lack of physical or mental activity, social isolation, and poor nutrition. To mitigate these effects, students proposed promoting social engagement, encouraging physical exercise, providing preventive health information, and implementing reward-based systems.

This phase provided students with their first opportunity to explore AI as a collaborative and informational partner rather than a mere provider of answers. By triangulating information from ChatGPT, interviews, and external sources, they engaged in a critical validation process that reinforced metacognitive awareness and epistemic vigilance. The integration of AI in the empathize phase allowed students to contrast algorithmic information with lived experiences and contextual knowledge—an essential step toward developing both computational thinking and ethical reflection on technology use. This outcome aligns with the study’s constructivist framework by fostering knowledge construction through dialogic interaction between students and diverse data sources.

5.2 Define phase

In the define phase, students were asked to complete the “How might we?” activity, which aimed to generate solutions for the causes identified in the empathize phase. The results are presented in Table 5.

Table 5

Table 5. This table illustrates student-generated problem statements formulated during the define phase of the design thinking process.

Figure 4 presents the word clouds generated from student responses in this phase. The analysis reaffirmed that cognitive decline was primarily linked to mental illnesses, lack of physical or mental activity, social isolation, and poor nutrition, as mentioned in the cause-effect diagram activity. Based on these insights, students concluded that fostering social interaction, engaging in physical activity, providing preventive health education, and rewarding participation could help mitigate cognitive deterioration.

Figure 4
Two cloud-shaped word clusters with stylized text. Cluster (a) contains words like nutrition, reading, exercise, addiction, disease, mental, loneliness, and health. Cluster (b) includes information, games, challenge, support, rewards, goals, and prevention.

Figure 4. This figure displays two word clouds derived from student inputs: (a) shows frequently mentioned causes identified in the Ishikawa diagrams, highlighting perceived contributors to the central problem; (b) presents key ideas proposed in the How might we…? formulation activity, emphasizing potential solution pathways generated during the ideation phase.

In this phase, students transitioned from problem identification to structured ideation by formulating “How might we…?” questions. This required them to translate complex social and health-related issues into solvable design challenges, reinforcing algorithmic thinking and the decomposition of problems into smaller, actionable components. The collaborative ideation, supported by generative AI, enhanced their ability to refine problems and propose preliminary solutions, while also developing their skills in abstraction and generalization—two core dimensions of computational thinking. The results also demonstrate an early awareness of user needs and ethical considerations, suggesting that students began to balance technical feasibility with human-centered design principles.

5.3 Prompt development and refinement

The progressive refinement of students’ prompt formulation was one of the clearest learning trajectories observed during the intervention. This evolution was assessed using a custom rubric with six key categories: Prompt Clarity (PC), Personality Adopted (PA), Step by Step (SS), Example Use (EU), Conversation Thread (CT), and Works Properly (WP). These categories reflect the essential features of effective interaction with ChatGPT in coding tasks and support the development of computational thinking (see Table 6).

Table 6

Table 6. Proposed rubric for prompt efficiency assessment.

The rubric presented in Table 6 was designed specifically for this research as an analytical tool to structurally evaluate the interactions between students and generative artificial intelligence. While we did not employ a formal statistical validation process for the rubric (e.g., through factor analysis or internal consistency measures), its development was based on widely accepted principles in instructional design and on recommendations for formulating effective prompts in conversational AI settings.
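To make the rubric dimensions concrete, the sketch below assembles a coding prompt that deliberately addresses four of them (Personality Adopted, Prompt Clarity, Step by Step, and Example Use). It is a hypothetical illustration of the kind of prompt the rubric rewards, not an instrument used with the students.

def build_prompt(persona: str, task: str, steps: list[str], example: str) -> str:
    """Assemble a prompt covering several rubric dimensions:
    Personality Adopted (persona), Prompt Clarity (explicit task),
    Step by Step (numbered sub-tasks), and Example Use (worked example)."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return (
        f"Act as {persona}.\n"
        f"Task: {task}\n"
        f"Proceed step by step:\n{numbered}\n"
        f"Example of the expected behavior: {example}"
    )

# Hypothetical usage echoing the kind of game-coding requests students made.
print(build_prompt(
    persona="an experienced Python tutor for beginners",
    task="write a console game on a 5x5 grid where the player repeats a color sequence",
    steps=[
        "Generate a random sequence of five colors.",
        "Show the sequence, then ask the player to reproduce it.",
        "Track completion time and offer to play again.",
    ],
    example="If the sequence is orange, yellow, green, green, yellow, the player must enter it in that order.",
))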

To measure improvement, each team’s ChatGPT conversations were evaluated at two points: an initial attempt and a final submission. These were scored using a five-point Likert scale based on the aforementioned dimensions.

Among the six evaluated dimensions, prompt clarity showed the most evident improvement. At the start of the project, students often submitted vague instructions, leading to incomplete or incorrect outputs from the AI. As the intervention progressed, students learned to formulate more precise and contextualized prompts. For example:

• Initial prompt: “Write a Python code for a game using a 5×5 cell matrix.”

• Final prompt: “Write a Python code for a game using a 5×5 cell matrix where the player connects different colors in a randomly generated order (out of five colors). The game should track the time taken to complete, ask if the player wants to play again, and restart with a new random order. Additionally, add a button to display the required sequence.”

These refinements led to more accurate and functional code generation. The Python code obtained through ChatGPT allowed students to successfully develop working prototypes of their games (see Figure 5). As further evidence of students’ progress in prompt design and computational problem-solving, Appendix A includes the complete code of a Connect 4 game developed by one of the teams. This final product illustrates the application of generative AI in producing functional, user-oriented applications through iterative prompt refinement.
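For illustration only, the following minimal console sketch approximates the behavior described in the refined prompt (a random five-color sequence, completion timing, and a replay option). It is a deliberate simplification under those assumptions; the students’ actual prototypes were graphical, as shown in Figure 5, and were produced through iterative dialogue with ChatGPT.

import random
import time

COLORS = ["red", "orange", "yellow", "green", "blue"]

def play_round() -> None:
    # Randomly generate the sequence of five colors the player must reproduce.
    sequence = [random.choice(COLORS) for _ in range(5)]
    print("Memorize this sequence:", ", ".join(sequence))
    input("Press Enter when you are ready to repeat it...")

    start = time.time()
    for expected in sequence:
        guess = input("Next color: ").strip().lower()
        if guess != expected:
            print(f"Wrong! The correct color was '{expected}'.")
            return
    elapsed = time.time() - start
    print(f"Correct! You completed the sequence in {elapsed:.1f} seconds.")

def main() -> None:
    # Restart with a new random sequence until the player declines.
    while True:
        play_round()
        if input("Play again? (y/n): ").strip().lower() != "y":
            break

if __name__ == "__main__":
    main()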

Figure 5
Instructions for a color pattern game. One window explains that the player must click on colors in the correct order as part of a sequence of five colors. Another window gives the pattern: orange, yellow, green, green, yellow. The game interface displays a grid of colored squares with labeled buttons.

Figure 5. This figure presents three key interface windows of the game prototype: (a) the Instructions Window, which guides users on how to play; (b) the Sequence Window, displaying the color pattern the player must memorize and replicate; and (c) the Game Interface, where the interactive gameplay takes place.

Students also demonstrated progress in other communication strategies, such as breaking down complex tasks into subtasks, maintaining a coherent conversational thread, and providing examples. These skills are consistent with the principles of design thinking and iterative learning that underpin the intervention.

To visually represent this progress, radar charts were created to compare each team’s performance across the six categories between their initial and final prompts (see Figure 6).

Figure 6
Radar charts for four games: (a) Game of Life, (b) Connect 4, (c) Mine Weeper, (d) Guess the Number. Each chart compares initial (dark blue) and final (light blue) values across six metrics: PA, PC, WP, CT, EU, SS.

Figure 6. This figure presents radar charts illustrating the progression of different student teams across six indicators: Prompt Clarity (PC), Personality Adopted (PA), Step-by-Step Reasoning (SS), Example Use (EU), Conversation Thread (CT), and Functionality (Works Properly—WP). Charts are shown for four game projects: (a) Game of Life, (b) Connect 4, (c) Minesweeper, and (d) Guess the Number. Each chart compares performance in the first and final iterations of AI interaction, highlighting improvements in prompt engineering and computational thinking.

Figure 6 shows a consistent improvement across all teams. Final prompts exhibited clearer instructions, more appropriate use of expert personas, better task structuring, and more coherent interactions. The generated code also improved in functionality and alignment with the intended game designs.
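As an illustration of how the comparison in Figure 6 can be visualized, the sketch below draws a radar chart for a single team with matplotlib; the initial and final Likert scores are hypothetical placeholders, not the study’s data.

import numpy as np
import matplotlib.pyplot as plt

# The six rubric dimensions used to assess prompt quality.
labels = ["PC", "PA", "SS", "EU", "CT", "WP"]

# Hypothetical 5-point Likert scores for one team (not the study's data).
initial = [2, 1, 2, 1, 2, 2]
final = [4, 3, 4, 3, 4, 5]

# One angle per dimension; repeat the first angle to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for scores, name in [(initial, "Initial prompt"), (final, "Final prompt")]:
    values = scores + scores[:1]  # close the shape
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.2)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.legend(loc="upper right")
plt.show()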

These results suggest that the intervention fostered the development of sophisticated strategies for AI interaction. As students iterated and received feedback—both from the AI and from peers—they became more autonomous in refining their queries and verifying AI outputs. This reflects a meaningful integration of computational and design thinking, emphasizing metacognitive awareness and epistemic vigilance as key learning outcomes.

5.4 Elevator pitch analysis

As a final integrative activity, students presented an elevator pitch aimed at communicating their game proposals in a concise and compelling manner. The evaluation focused on four content analysis categories: (1) Project Definition (D), (2) Feasibility (V), (3) Originality and Creativity (OC), and (4) Investor Appeal (AI). These criteria were defined as follows:

Project Definition: Clarity in describing the product or service, including its key features, use case, and target audience.

Feasibility: Reasoning regarding the product’s viability, supported by references to similar solutions or existing technologies.

Originality and Creativity: Novelty of the idea and creative enhancement of existing formats.

Investor Appeal: Ability to distinguish the project from others and articulate a potential revenue model.

The evaluation rubric is presented in Table 7. Students’ scores across these four dimensions are visually summarized in Figure 7, which displays a comparative weighting of key characteristics per team. In addition, a qualitative synthesis of each team’s pitch—including excerpts, strengths, and areas for improvement—is presented in Table 8, offering insight into how students framed their proposals and the recurring communication challenges observed.

Table 7

Table 7. This table presents the rubric used to assess student elevator pitches based on four key criteria: defines its project, product, or service; feasibility; originality and creativity; and attractiveness compared to other projects and ability to sell it to investors.

Figure 7
Bar chart comparing four criteria for four games: Game of Life, Connect 4, Mine Weeper, and Guess a Number. Criteria include Definition, Feasibility, Originality and Creativity, and Attractiveness to Investors, each rated up to 10.

Figure 7. This figure presents the evaluation of student elevator pitches based on four key characteristics: Problem Definition, Feasibility, Originality and Creativity, and Attractiveness to Investors. The chart reflects how each aspect was weighed and assessed to determine the overall quality and persuasive impact of the pitches.
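As an illustrative complement to Figure 7, the sketch below plots per-team scores on the four pitch criteria as a grouped bar chart; the teams’ scores shown here are hypothetical and do not reproduce the study’s results.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical scores (0-10) per team for the four pitch criteria; not the study's data.
teams = ["Game of Life", "Connect 4", "Minesweeper", "Guess the Number"]
criteria = ["Definition", "Feasibility", "Originality and Creativity", "Attractiveness to Investors"]
scores = np.array([
    [9, 6, 8, 5],   # Game of Life
    [8, 7, 7, 6],   # Connect 4
    [9, 5, 9, 5],   # Minesweeper
    [7, 6, 8, 4],   # Guess the Number
])

x = np.arange(len(teams))  # one group of bars per team
width = 0.2                # width of each bar within a group

fig, ax = plt.subplots(figsize=(8, 4))
for i, criterion in enumerate(criteria):
    ax.bar(x + i * width, scores[:, i], width, label=criterion)

ax.set_xticks(x + 1.5 * width)
ax.set_xticklabels(teams)
ax.set_ylim(0, 10)
ax.set_ylabel("Score (0-10)")
ax.legend()
plt.tight_layout()
plt.show()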

Table 8

Table 8. Strengths and weaknesses of the elevator pitch.

Although formal statistical validation (e.g., factor analysis or inter-rater reliability) was not conducted, the rubric’s structure is consistent with authentic evaluation practices in design-oriented education. Specifically, Table 7 was co-designed by the course instructor in collaboration with external evaluators from the Entrepreneurship and Innovation Park at Tecnologico de Monterrey (San Luis Potosí campus). These evaluators regularly assess early-stage student ventures and contributed their expert judgment to ensure the rubric captured relevant entrepreneurial dimensions such as viability, novelty, and investment appeal.

The rubric was intended as a formative and reflective instrument to guide students’ thinking about how to present their ideas persuasively and realistically. It was not designed for summative or high-stakes assessment. As such, the 4-point scale (Excellent, Good, Satisfactory, Insufficient) was adopted pragmatically to facilitate feedback and comparative reflection rather than precise measurement.

The results reveal that most teams performed well in defining their project ideas and incorporating creative elements. However, their performance was more limited in articulating feasibility and attractiveness to investors. While they conveyed clear and often innovative visions, they struggled to formulate compelling arguments about development viability or monetization strategies.

These observations align with broader patterns identified in the qualitative coding: students showed notable improvement in prompt formulation, strategic use of AI for ideation, and communication structuring—all of which contributed to the clarity and originality of their presentations. However, difficulties in feasibility assessment suggest a need for greater scaffolding in entrepreneurial reasoning and economic literacy within AI-mediated learning activities.

In sum, the elevator pitch activity functioned not only as a culminating demonstration of students’ creative work, but also as a formative assessment of their ability to synthesize design thinking, computational reasoning, and persuasive communication. The collaborative design of the rubric with domain experts and its use in a reflective, educational context underscore its value as a tool to support AI-enhanced project-based learning. These insights serve as a valuable bridge into the reflective discussion that follows in Section 6.

6 Discussion

This section revisits the central research question—how structured student–AI interactions in an educational setting can foster active learning, critical engagement, and the development of computational thinking—by interpreting the findings through a critical-constructivist lens. The structured instructional sequence, which included prompt formulation, design thinking, Python-based coding, and elevator pitch preparation, enabled students to iteratively refine their outputs, engage in critical dialogue, and assume increasing responsibility in the design and communication of their projects. These interactions with generative AI, when framed by pedagogical intentionality, supported improvements in computational reasoning, reflective thinking, and epistemic vigilance. The discussion that follows draws on these outcomes to explore how the pedagogical integration of AI reshapes learning processes, roles, and outcomes in the classroom.

6.1 AI as an interlocutor in the interaction and evolution of the didactic contract

The findings show that using generative AI in the classroom transforms traditional teaching-learning dynamics, particularly in the relationships among students, teachers, and technology. Rather than acting as a passive tool, AI is perceived as an active interlocutor that requires strategic interrogation to produce relevant responses (see Section 5.1). This form of interaction marks an evolution of the didactic contract: students are not merely responding to teacher prompts but are also required to formulate effective prompts for AI, reshaping their epistemic roles within the classroom.

This aligns with Yilmaz and Yilmaz (2023), who highlight AI as a “discursive co-participant,” and with Hu (2011) and Liao et al. (2024), who emphasize the mediating role of technology in knowledge production. A metacognitive process emerges: students reflect on how to interact with AI, iteratively refining their prompts while deepening their understanding of the task. Von Glasersfeld (1989) underscores the importance of error and revision in authentic learning, which is observable in the cycles of interaction described in Section 5.1.

The progression illustrated in Figure 6 offers additional insight into how students’ interactions with AI evolved throughout the instructional sequence. The radar charts reflect growth across six key indicators: Prompt Clarity, Personality Adopted, Step-by-Step Reasoning, Example Use, Conversation Thread, and Functionality. These indicators were derived from axial coding of earlier iterations of similar classroom interventions, forming part of the pedagogical knowledge built through reflective teaching practice. The observed progression suggests that students internalized the logic of prompt refinement and learned to modulate their queries based on feedback, thereby improving the coherence and functionality of AI-generated responses. Teams that exhibited the greatest gains tended to display early experimentation and more frequent iteration, often encouraged by teacher scaffolding or peer modeling. While the results are not statistically generalizable, they illustrate the value of iterative, scaffolded design in fostering epistemic agency and computational reasoning through AI-mediated learning.

Celik et al. (2024) and Dimitriadou and Lanitis (2023) emphasize student agency in AI-mediated learning, also reflected in students’ increasing autonomy. The teacher’s role must shift, as Bozkurt et al. (2024) and Yan et al. (2024) argue, from content transmitter to critical mediator.

These dynamics suggest that the traditional didactic triangle (T-S-K) must be reconceptualized to include AI as a fourth element—transforming it into a tetrahedron (T-S-K-AI) that reflects a model of distributed agency. This reconfiguration implies a rebalancing of power, responsibility, and initiative among human and non-human participants, demanding a reconsideration of pedagogical roles and interactions in digitally mediated learning environments.

6.2 Validation of information, critical thinking, and reflection as the axis of learning

A cross-cutting finding was the emphasis students placed on validating AI-generated information. Far from accepting it uncritically, they demonstrated awareness of its tentative and fallible nature. As shown in Section 5.2, students consistently compared ChatGPT outputs with other sources and discussed their plausibility in class, activating critical thinking and epistemic vigilance.

This aligns with Kumar (2006) and Von Glasersfeld (1989), who affirm that knowledge is constructed through active evaluation rather than passive reception. The development of epistemic agency in students mirrors Selwyn et al.’s (2020) notion of “pedagogical assemblages,” where learners engage with socio-technical systems reflexively. Rather than treating AI as an authoritative voice, students appropriated it as a tentative starting point, thereby assuming responsibility for their own meaning-making.

Furthermore, as illustrated in Sections 5.2 and 5.3, generative AI operated as a semiotic mediator that introduced new communicative dynamics into the classroom. Students often relied on AI outputs as initial scaffolds for interpretation and production, refining their responses through cycles of questioning and elaboration. This underscores the importance of cultivating critical semiotic awareness—an understanding of how meaning is shaped and negotiated when interacting with algorithmic discourse. AI thus served not only as a tool for information access but as a catalyst for reflective thought and dialogic knowledge construction.

These classroom behaviors offer early evidence of what we conceptualize as epistemic agency: the student’s capacity to make informed, autonomous decisions about the credibility, use, and contextual relevance of AI-generated content. While this construct is not quantitatively measured in the present study, it is discussed interpretively as a central feature of critical engagement. In an ongoing line of research, we are developing a formal operationalization of epistemic agency based on coding and frequency analysis of student-AI interactions—work that complements and extends the qualitative findings reported here.

6.3 Creativity, design, and social responsibility

Generative AI enabled divergent exploration and iterative refinement, fostering creativity and design thinking. Kim and Adlof (2024) argue that creative thinking in AI-mediated contexts requires both expansive ideation and critical selection. This was evident in the students’ projects (Section 5.3), where AI outputs were not simply accepted but reconfigured as part of creative solutions.

Von Glasersfeld (1989) and Kumar (2006) frame this as active meaning-making. Giannakos et al. (2024) and Fleetwood (2014) call for an ethical dimension to digital creativity, which was echoed in classroom discussions around algorithmic limitations and societal impact (see Sections 5.3 and 5.4). Students demonstrated reflexivity about how AI structures knowledge and the implications of that structuring in social contexts.

As the activities in Section 5.3 demonstrate, both students and teachers exercised pedagogical agency in shaping the conditions under which AI was deployed for meaningful learning. Rather than defaulting to tool-centered or technocentric practices, participants critically evaluated how AI could support or hinder ethical decision-making, collaboration, and innovation. This highlights the growing importance of cultivating digital capabilities not merely as technical proficiency, but as an extension of responsible and socially aware creative engagement.

6.4 Communication, elevator pitch, and argumentative synthesis

The elevator pitch activity pushed students to synthesize complex ideas into coherent, persuasive narratives. As seen in Section 5.4, they developed stronger argumentative clarity, despite challenges in articulating feasibility and investment potential. The evaluation rubric (Table 7) and content analysis (Table 8) show that while creativity and definition were strong, business modeling remained underdeveloped.

This supports Bozkurt et al. (2024) and Mora-Ochomogo et al. (2021), who emphasize communication skills as central in AI-mediated environments. The iterative nature of these presentations, often rehearsed through ChatGPT, aligns with Von Glasersfeld (1989) and Kumar (2006) in treating explanation as a key indicator of understanding. Students modulated tone, structure, and logic in real time, using AI as both rehearsal partner and conceptual mirror.
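
To picture this rehearsal dynamic, the following minimal sketch shows how an iterative critique-and-revise loop could be scripted. It assumes the openai Python client and a placeholder model name, and it is offered only as an illustration; in the classroom, students interacted with ChatGPT directly rather than through code of this kind.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pitch = "Our app reduces school food waste by matching cafeterias with local shelters."

# Iterative rehearsal loop: each round asks the model for critique,
# then for a tightened version of the pitch (illustrative only).
for round_number in range(3):
    feedback = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, an assumption of this sketch
        messages=[
            {"role": "system", "content": "You are a critical pitch coach."},
            {"role": "user", "content": "Critique this 30-second elevator pitch for "
                                        f"clarity, feasibility, and audience appeal:\n{pitch}"},
        ],
    ).choices[0].message.content

    revision = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": f"Pitch:\n{pitch}\n\nFeedback:\n{feedback}\n\n"
                                        "Rewrite the pitch in under 80 words, addressing the feedback."},
        ],
    ).choices[0].message.content

    print(f"--- Round {round_number + 1} ---\n{feedback}\n\nRevised pitch:\n{revision}\n")
    pitch = revision  # the student decides whether to accept or further adjust the revision
```

The key pedagogical point is that the learner, not the model, remains the arbiter of which revisions to keep.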

Importantly, the evolution in students’ discourse—as documented in Section 5.4—reflects broader transformations in instructional design. Teachers were not simply transmitters of content but facilitators who crafted opportunities for AI-augmented synthesis. Through structured scaffolding, iterative refinement, and audience-focused framing, students were guided to use AI not as a content generator but as a catalyst for refining their communication. These shifts reaffirm the value of pedagogical intentionality in ensuring that AI serves as a stimulus for argumentation, reflection, and critical expression, rather than a shortcut for delivering pre-packaged ideas.

6.5 Implications for teaching practice and educational research

This study highlights a necessary redefinition of the teacher’s role—not just as content expert, but as epistemic mediator, critical designer, and ethical guide. This reconfiguration, observable throughout the teaching sequence (Sections 5.1–5.4), aligns with frameworks proposed by Giannakos et al. (2024), who advocate for teacher agency in human-AI hybrid classrooms.

The tetrahedral model introduced in this study (T-S-K-AI) offers a new grammar for educational interaction in the age of algorithmic co-participation. It makes visible the performative entanglements of agency, discourse, and intentionality, allowing educators to assess when, how, and why AI should intervene.
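
As a reading aid, the brief sketch below enumerates the four vertices and six dyadic relations of the T-S-K-AI tetrahedron. It is our illustrative rendering of the model's structure, not a formal specification.

```python
from itertools import combinations

# Minimal sketch of the didactic tetrahedron: four vertices and the six
# dyadic relations among them, each open to analysis in terms of agency,
# discourse, and intentionality (illustrative rendering only).
vertices = ["Teacher", "Student", "Knowledge", "AI"]

for a, b in combinations(vertices, 2):
    print(f"{a} <-> {b}")
# Yields the six relations: T-S, T-K, T-AI, S-K, S-AI, K-AI
```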

Fleetwood (2014), Nellhaus (1998), and Hu (2011) remind us that educational systems are not just transmitters of knowledge but performative arenas where agency is negotiated. Empirical, context-sensitive research is crucial for understanding how AI technologies reconfigure these negotiations on the ground.

In this study, we have incorporated a reflection on how the student, beyond being a passive recipient of content, assumes an active role as a prompt designer, critical evaluator of AI-generated information, and co-creator of digital products. This new role involves developing competencies such as epistemic vigilance, ethical decision-making, and metacognitive autonomy. In this sense, the student not only learns with AI, but also about AI and through AI, adopting functions that were previously exclusive to the teacher or instructional designer.

These pedagogical transformations also raise broader institutional questions. While this study focused on classroom practice, it underscores the importance of systemic support for meaningful AI integration. Teachers need more than digital tools—they require shared professional spaces, robust ethical guidelines, and institutional conditions that support sustained inquiry and innovation. The design-based, participatory approach adopted here offers a replicable framework for researching and iteratively improving AI-enhanced educational practices across diverse contexts.

By way of closing: While this study centers primarily on the evolving role of the teacher in AI-mediated environments, it is equally important to acknowledge the transformation of the learner’s role. As generative AI becomes a more active participant in the learning process, students are increasingly positioned not just as recipients of knowledge, but as prompt engineers, critical interlocutors, and epistemic agents. This evolving learner–AI–knowledge relationship, a subcomponent of the broader didactic tetrahedron proposed here, suggests the emergence of new pedagogical contracts. Although a full exploration of these dynamics exceeds the scope of this study, our observations highlight the importance of fostering student autonomy, creativity, and critical engagement in order to support meaningful, sustainable learning in AI-enhanced classrooms.

6.6 Theoretical extensions: sociomateriality and epistemic agency

In light of recent empirical studies published in 2024 and 2025, we further underscore the novelty of this work by situating it in contrast to existing literature on AI-mediated learning. While prior studies—such as those by Hu (2011), Giannakos et al. (2024), and others—have examined generative AI as a tool for enhancing student engagement, creativity, or task performance, they often do so from a predominantly instrumental or techno-constructivist lens. In contrast, our study introduces a critical-constructivist and epistemologically grounded framework that reconceptualizes AI as a semiotic and epistemic actor within the pedagogical contract. The proposed T–S–K–AI model reconfigures traditional educational interactions by positioning AI not merely as a tool, but as a co-participant in meaning-making and cognitive mediation. Moreover, the study provides concrete illustrations of how instructional design and teacher agency are central to shaping students’ epistemic agency in AI-mediated environments. These contributions extend beyond the scope of most existing work, offering a theoretically robust and pedagogically actionable model that is grounded in empirical classroom practice.

Beyond the pedagogical perspective developed in this study, the sociomaterial approach offers a complementary theoretical lens to understand the entanglement of human and nonhuman actors in educational settings. From this perspective, learning is not solely a cognitive activity but a material enactment shaped by the dynamic interplay between students, teachers, technologies, and physical environments (Fenwick, 2015; Johri, 2022). This view draws significantly from Actor-Network Theory (ANT; Latour, 2005; Hetland, 2012), which challenges conventional separations between human and technological agency by conceptualizing learning as an emergent property of relational assemblages that include both social and material actants. In educational research, ANT-inspired sociomateriality has been instrumental in analyzing how technologies like AI participate in the construction of agency, knowledge, and practice.

Empirical work such as Verster and van den Berg (2022) and Newman (2023) illustrates how AI tools, classroom artifacts, and institutional norms co-produce agency and learning experiences. While our proposed T–S–K–AI model focuses on the didactic contract and pedagogical agency, future work could extend it using insights from sociomateriality and ANT to explore how epistemic agency emerges from these broader assemblages of human and nonhuman relations.

7 Concluding remarks

Integrating generative artificial intelligence, specifically ChatGPT, into a structured instructional sequence demonstrated strong potential to foster computational thinking and design thinking skills in high school students. The findings confirm that AI-mediated learning can enhance problem-solving, support iterative reasoning, and promote creativity—provided its use is pedagogically grounded and critically mediated. Through iterative engagement with AI, students developed key competencies such as empathy, structured reasoning, and the ability to prototype and test digital solutions—skills that are essential in both academic and professional contexts.

The results underscore the value of combining traditional pedagogical strategies with emerging technologies to enrich the learning process. The structured use of design thinking methodologies and visual tools such as Ishikawa diagrams helped students deepen their understanding of root causes, refine their conceptual frames, and generate innovative responses. This aligns with research suggesting that the impact of AI in education is significantly amplified when situated within reflective, exploratory, and iterative learning frameworks. Yet, the study also confirms that technological tools alone are insufficient: pedagogical design must ensure that learners interact with AI in ways that promote epistemic vigilance, critical questioning, and thoughtful decision-making.

One of the most notable advances was the improvement in prompt formulation, which became more precise, contextualized, and strategic throughout the project. This progression reflects students’ growing sophistication in interacting with AI and underscores the relevance of this skill in contemporary education and professional environments. However, persistent challenges were also observed. Students often struggled to critically validate AI outputs or justify the feasibility and market potential of their solutions. As revealed in the elevator pitch activity (Section 5.4), while students proposed technically sound and creative ideas, many lacked the strategic language and economic literacy necessary to assess their viability. This points to the need for scaffolding in areas such as entrepreneurial thinking, competitive analysis, and the ethical implications of digital innovation.
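
The contrast below illustrates, with hypothetical prompts rather than excerpts from student work, the kind of shift we describe from vague requests toward precise, contextualized, and strategic formulations.

```python
# Illustrative only: hypothetical prompts showing the progression observed,
# not verbatim examples from the study's data.

early_prompt = "Give me an idea for an app about recycling."

later_prompt = (
    "Act as a product design mentor. Our high school team is prototyping a Python app "
    "that helps a 2,000-student campus separate recyclables. "
    "List three concrete features, note one technical limitation of each, "
    "and suggest a simple way we could test the most feasible feature in two weeks."
)
```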

Overall, this study provides evidence that generative AI, when embedded in well-designed learning environments, can serve as a catalyst for both computational development and critical reflection. These conclusions are directly grounded in the learning patterns and student performances presented in Sections 5.1 through 5.4 and elaborated through the interpretative lens in Section 6.

7.1 Study limitations

This study was conducted within a single high school context, following a qualitative action-research design centered on theoretical saturation rather than statistical generalizability. As such, the insights produced are contextually grounded and analytically transferable, rather than representative of broader populations. The focus on a specific instructional sequence and the extended temporal scope of the teacher’s reflective practice enabled rich, theory-informed interpretations of student engagement and learning. This aligns with grounded theory principles and the logic of theoretical sampling that underpins the qualitative paradigm.

Similarly, the absence of a control group is not a limitation in the traditional sense, but a methodological feature of the study’s design. Rather than seeking causal comparison, the purpose was to understand the situated dynamics of learning with generative AI and to explore how structured pedagogical design mediates such interactions. The findings therefore reflect the intentional and iterative nature of the teaching interventions, not experimental manipulation.

Potential interpretive bias is always a consideration in qualitative research. In this study, such risks were addressed through prolonged engagement with the teaching context, a transparent account of the coding process, and the alignment of findings with a well-defined critical-constructivist framework. Nevertheless, we acknowledge that the dual role of the teacher-researcher may have influenced certain interpretations, and we note this as an inherent limitation.

Finally, this study did not assess long-term retention or transfer of computational thinking skills. While short-term improvements in prompt formulation, reasoning, and creative problem-solving were documented, a longitudinal design would be necessary to evaluate sustained learning outcomes over time. We consider this an important direction for future research.

7.2 Future directions

Drawing on the patterns identified in Figures 5–7, we propose the following recommendations to enhance the pedagogical impact of AI integration:

• Incorporate structured competitive analysis frameworks that help students evaluate the feasibility, sustainability, and scalability of their AI-driven projects.

• Expand instructional activities focused on prompt formulation and iterative design, encouraging more effective and adaptive use of generative AI across disciplines.

• Integrate economic literacy and market validation components, equipping students with entrepreneurial skills relevant to AI-mediated innovation.

These recommendations aim to consolidate the role of generative AI as a transformative educational tool, capable of empowering learners to navigate complex socio-technical environments and actively contribute to the design of ethical, sustainable, and innovative solutions. In future implementations, complementary methods such as reflective journals or post-intervention focus groups could further enrich the pedagogical understanding of student experiences and provide deeper insight into the learning process.

From a broader perspective, this study contributes to the ongoing discourse on the digital transformation of education by highlighting the need for intentional instructional experiences that balance technical mastery with critical and creative engagement. The critical-constructivist lens adopted here offers a robust framework for understanding how AI can support—and transform—the processes of knowledge construction, underscoring the imperative that educational technologies serve pedagogical goals centered on autonomy, ethical reasoning, and social responsibility.

Future research should continue to explore how generative AI can be integrated into diverse educational contexts in ways that are not only technically effective, but also culturally relevant, ethically informed, and pedagogically transformative.

Author’s note

Some of the theoretical and methodological developments mentioned in this manuscript—particularly the operationalization of epistemic agency in AI-mediated learning environments—are part of ongoing research not yet published. Readers interested in these emerging frameworks are encouraged to contact the corresponding author for further details or collaboration inquiries.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

JS: Formal analysis, Writing – original draft, Writing – review & editing, Software. GF-E: Writing – review & editing, Data curation. JS-C: Visualization, Writing – review & editing. RC-Q: Data curation, Formal analysis, Software, Validation, Writing – original draft. JO-G: Formal analysis, Validation, Conceptualization, Investigation, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. The APC was funded by Tecnologico de Monterrey, through its Vicerrectoría de Investigación y Transferencia de Tecnología (Office of Research and Technology Transfer).

Acknowledgments

The authors would like to thank the students and educators who participated in this study, as well as Tecnologico de Monterrey, Campus San Luis Potosí, and its Vicerrectoría de Investigación y Transferencia de Tecnología, for their support in facilitating the research process.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that Gen AI was used in the creation of this manuscript. This study involved the use of generative artificial intelligence (GenAI) tools, specifically ChatGPT, as part of the research design. However, no part of this manuscript was generated by AI; all text, analysis, and conclusions were developed by the authors. AI tools were only used within the study’s instructional framework to analyze student interaction with AI.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1597249/full#supplementary-material

References

Alier, M., García-Peñalvo, F., and Camba, J. D. (2024). Generative artificial intelligence in education: from deceptive to disruptive. Int. J. Interact. Multimed. Artif. Intell. 8:5. doi: 10.9781/ijimai.2024.02.011

Apple, M. W. (2004). Ideology and curriculum. New York: Routledge. doi: 10.4324/9780203487563

Apple, M. W. (1980). The other side of the hidden curriculum: correspondence theories and the labor process. J. Educ. 162, 47–66. doi: 10.1177/002205748016200105

Balacheff, N. (1994). Didactique et intelligence artificielle. Recherches en Didactique des Mathématiques 14, 9–42.

Baskerville, R., and Pries-Heje, J. (1999). Grounded action research: a method for understanding IT in practice. Account. Manag. Inf. Technol. 9, 1–23. doi: 10.1016/S0959-8022(98)00017-4

Benenson, J., and Bryan, T. K. (2024). The classroom laboratory. In Routledge eBooks, 49–65. doi: 10.4324/9781032671253-6

Bhaskar, R. (1979). The possibility of naturalism. 3rd Edn. London: Routledge.

Billett, S. (2006). Relational interdependence between social and individual agency in work and working life. Mind Cult. Act. 13, 53–69. doi: 10.1207/s15327884mca1301_5

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y., Nerantzi, C., Moore, S., et al. (2024). The manifesto for teaching and learning in a time of generative AI: a critical collective stance to better navigate the future. Open Praxis 16, 487–513. doi: 10.55982/openpraxis.16.4.777

Brousseau, G. (2002). Theory of didactical situations in mathematics. eds. N. Balacheff, M. Cooper, R. Sutherland, and V. Warfield. Mathematics Education Library (Dordrecht: Springer), 226–249. doi: 10.1007/0-306-47211-2

Bush, M. D., and Mott, J. D. (2009). The transformation of learning with technology: learner-centricity, content and tool malleability, and network effects. Educ. Technol. 49, 3–20. Available online at: http://www.jstor.org/stable/44429655

Celik, I., Gedrimiene, E., Siklander, S., and Muukkonen, H. (2024). The affordances of artificial intelligence-based tools for supporting 21st-century skills: a systematic review of empirical research in higher education. Australas. J. Educ. Technol. 40, 19–38.

Chavira-Quintero, R., and Olais-Govea, J. M. (2023). Analysis of content knowledge categories in preservice teachers when teaching the concept of number in preschool. Sustainability 15:3981. doi: 10.3390/su15053981

Cruz-Ramírez, S. R., García-Martínez, M., and Olais-Govea, J. M. (2022). NAO robots as context to teach numerical methods. Int. J. Interact. Des. Manuf. (IJIDeM) 16, 1337–1356. doi: 10.1007/s12008-022-01065-y

Díaz de León-López, M. G., Velázquez-Sánchez, M. D. L., Sánchez-Madrid, S., and Olais-Govea, J. M. (2021). A simple approach to relating the optimal learning and the meaningful learning experience in students age 14–16. Information 12:276. doi: 10.3390/info12070276

Dimitriadou, E., and Lanitis, A. (2023). A critical evaluation, challenges, and future perspectives of using artificial intelligence and emerging technologies in smart classrooms. Smart Learn. Environ. 10:12. doi: 10.1186/s40561-023-00231-3

Fenwick, T. (2015). "Sociomateriality and learning: a critical approach" in The SAGE handbook of learning. eds. D. Scott and E. Hargreaves (London: SAGE), 83–93. Available online at: https://uk.sagepub.com/en-gb/eur/the-sage-handbook-of-learning/book242764

Fleetwood, S. (2014). "Bhaskar and critical realism" in The Oxford handbook of sociology, social theory, and organization studies: contemporary currents. eds. P. S. Adler, P. du Gay, G. Morgan, and M. I. Reed (Oxford: Oxford University Press), 182–219.

Freire, P. (1970). Pedagogy of the oppressed. New York: Herder and Herder.

García-Martínez, M., Cruz-Ramírez, S. R., and Olais-Govea, J. M. (2022). Encryption activity to improve higher-order thinking in engineering students. Int. J. Interact. Des. Manuf. (IJIDeM), 299–316. doi: 10.1007/s12008-021-00756-2

Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., et al. (2024). The promise and challenges of generative AI in education. Behav. Inf. Technol. 44, 2518–2544. doi: 10.1080/0144929X.2024.2394886

Giroux, H. A. (2010). Rethinking education as the practice of freedom: Paulo Freire and the promise of critical pedagogy. Policy Futures Educ. 8, 715–721. doi: 10.2304/pfie.2010.8.6.715

Hennessy, S. (1993). Situated cognition and cognitive apprenticeship: implications for classroom learning. Stud. Sci. Educ. 22, 1–41. doi: 10.1080/03057269308560019

Hetland, P. (2012). T. Fenwick and R. Edwards: Actor-Network Theory in Education. Nord. J. Digit. Lit. 7, 70–72. doi: 10.18261/issn1891-943x-2012-01-06

Hu, C. (2011). Computational thinking: what it might mean and what we might do about it. In Proceedings of the 16th annual joint conference on innovation and technology in computer science education, 223–227.

Johri, A. (2022). Augmented sociomateriality: implications of artificial intelligence for the field of learning technology. Res. Learn. Technol. 30. doi: 10.25304/rlt.v30.2642

Joyce, B., and Calhoun, E. (2024). Models of teaching. New York: Taylor & Francis. doi: 10.4324/9781003455370

Kim, M., and Adlof, L. (2024). Adapting to the future: ChatGPT as a means for supporting constructivist learning environments. TechTrends 68, 37–46. doi: 10.1007/s11528-023-00899-x

Kumar, M. (2006). Constructivist epistemology in action. J. Educ. Thought / Revue de la Pensée Éducative 40, 247–261. Available online at: http://www.jstor.org/stable/23767425

Latour, B. (2005). Reassembling the social: an introduction to actor-network-theory. Oxford: Oxford University Press. doi: 10.1093/oso/9780199256044.001.0001

Liao, J., Zhong, L., Zhe, L., Xu, H., Liu, M., and Xie, T. (2024). Scaffolding computational thinking with ChatGPT. IEEE Trans. Learn. Technol. 17, 1628–1642. doi: 10.1109/TLT.2024.3392896

Luckin, R., Holmes, W., Griffiths, M., and Forcier, L. B. (2016). Intelligence unleashed: an argument for AI in education. Pearson Education.

McLaren, P. (2020). The future of critical pedagogy. Educ. Philos. Theory 52, 1243–1248. doi: 10.1080/00131857.2019.1686963

Mora-Ochomogo, I., Regis-Hernández, F., and Olais-Govea, J. M. (2021). Practice-based education in engineering addressing real-business problems amid the Covid-19 crisis. In Proceedings of the 2nd international conference on industrial engineering and industrial management, 63–70.

Nellhaus, T. (1998). Signs, social ontology, and critical realism. J. Theory Soc. Behav. 28, 1–24. doi: 10.1111/1468-5914.00060

Newman, S. (2023). Situating children's agency in grades 2 & 3 Ghanaian public schools' classrooms: a sociomaterial perspective. Doctoral thesis, Université de Neuchâtel.

Pendergast, D., Main, K., and Bahr, N. (2024). Teaching middle years. London: Routledge. doi: 10.4324/9781003458586

Selwyn, N., Hillman, T., Eynon, R., Ferreira, G., Knox, J., Macgilchrist, F., et al. (2020). What's next for ed-tech? Critical hopes and concerns for the 2020s. Learn. Media Technol. 45, 1–6. doi: 10.1080/17439884.2020.1694945

Tang, K. S., Cooper, G., Rappa, N., Cooper, M., Sims, C., and Nonis, K. (2024). A dialogic approach to transform teaching, learning & assessment with generative AI in secondary education: a proof of concept. Pedagogies 19, 493–503. doi: 10.1080/1554480X.2024.2379774

Verster, B., and van den Berg, C. (2022). Theorising with sociomateriality: interdisciplinary collaboration in socio-technical learning environments. Educ. Res. Soc. Change 11, 1–18. doi: 10.17159/2221-4070/2021/v11i2a3

Von Glasersfeld, E. (1989). Cognition, construction of knowledge, and teaching. Synthese 80, 121–140. doi: 10.1007/BF00869951

Voorhees, R. A., and Bedard-Voorhees, A. (2016). "Principles for competency-based education" in Instructional-design theories and models, volume IV (New York: Routledge), 33–64. doi: 10.1145/3467967

Wertsch, J. V. (1991). Voices of the mind: a sociocultural approach to mediated action. Cambridge, MA: Harvard University Press.

Wu, Y. (2024). Revolutionizing learning and teaching: crafting personalized, culturally responsive curriculum in the AI era. Creat. Educ. 15, 1642–1651.

Xia, L., Shen, K., Sun, H., An, X., and Dong, Y. (2025). Developing and validating the student learning agency scale in generative artificial intelligence (AI)-supported contexts. Educ. Inf. Technol. 41, 1–23. doi: 10.1007/s10639-024-13137-5

Yan, L., Greiff, S., Teuber, Z., and Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nat. Hum. Behav. 8, 1839–1850. doi: 10.1038/s41562-024-02004-5

Yilmaz, R., and Yilmaz, F. G. K. (2023). The effect of generative AI-based tool use on students' computational thinking skills, programming self-efficacy and motivation. Comput. Educ. Artif. Intell. 4:100147. doi: 10.1016/j.caeai.2023.100147

Keywords: generative artificial intelligence, computational thinking, AI-mediated learning, epistemic agency, critical constructivism, educational innovation

Citation: Sánchez Muñoz JA, Flores-Eraña G, Silva-Campos JM, Chavira-Quintero R and Olais-Govea JM (2025) GenAI as a cognitive mediator: a critical-constructivist inquiry into computational thinking in pre-university education. Front. Educ. 10:1597249. doi: 10.3389/feduc.2025.1597249

Received: 20 March 2025; Accepted: 12 August 2025;
Published: 10 September 2025.

Edited by:

Stamatios Papadakis, University of Crete, Greece

Reviewed by:

Soo Lee, American Institutes for Research, United States
Hongzhi (Veronica) Yang, The University of Sydney, Australia
Irenne Yuwono, Griffith University, Australia
Wati Sukmawati, Universitas Muhammadiyah Prof Dr Hamka, Indonesia

Copyright © 2025 Sánchez Muñoz, Flores-Eraña, Silva-Campos, Chavira-Quintero and Olais-Govea. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: José Manuel Olais-Govea, olais@tec.mx
