- 1 Centrum Católica Graduate Business School, Pontifical Catholic University of Peru, Lima, Peru
- 2 Pontificia Universidad Católica del Perú CENTRUM Catolica, Lima, Peru
- 3 San Ignacio de Loyola - Escuela ISIL, Lima, Peru
Introduction: Higher education institutions implementing Generative Artificial Intelligence (GenAI) often assume uniform student adoption; however, evidence shows substantial variation in how learners engage with AI technologies. To address this heterogeneity, this study develops and validates a student typology framework integrating Technological Pedagogical Content Knowledge (TPACK) and the Unified Theory of Acceptance and Use of Technology (UTAUT), moving beyond generic implementation toward targeted educational interventions in business education.
Methods: Using a hybrid theoretical–empirical approach, we analyzed data from 252 MBA students. The integrated TPACK–UTAUT framework was applied to identify distinct patterns of GenAI engagement and adoption, enabling the empirical derivation and validation of stable student profiles.
Results: Three profiles emerged: Explorers (11%), younger students who actively experiment with GenAI despite limited formal training; Moderates (68%), systematic learners who favor structured approaches; and Skeptics (21%), experienced professionals who require clear educational value prior to adoption. Significant differences were observed across performance expectancy (p < 0.001), age (p < 0.001), and TPACK integration (p < 0.001), with strong theoretical alignment (Cramer’s V = 0.276, p < 0.001).
Discussion: Rather than treating students as a homogeneous group, we propose a differentiated instructional framework comprising project-based exploration for Explorers, scaffolded training sequences for Moderates, and evidence-based case studies for Skeptics. This framework addresses the practical challenge of supporting diverse learners in GenAI-enabled business education. The typology provides a practical segmentation tool for institutional decision-making, including resource allocation and faculty development. By leveraging learning analytics, institutions may approximate student profiles to inform differentiated support strategies. The study advances theoretical understanding of GenAI adoption heterogeneity while offering a practical framework for designing differentiated educational interventions.
1 Introduction
The impact of GenAI on people's lives is undeniable, beginning with the productivity gains this technology delivers. Several industries and economic sectors have reached new levels of efficiency by using GenAI in their operations (Ebert and Louridas, 2023; Korinek, 2023; Kshetri et al., 2024). Higher education is among the industries that have benefited most from incorporating GenAI into their core processes (Chan and Hu, 2023; Chiu, 2023).
While GenAI offers unprecedented opportunities to personalize learning pathways in higher education, there is considerable heterogeneity in how students perceive, adopt, and use these tools. This diversity presents a significant challenge for educational institutions seeking to implement effective AI-mediated teaching and learning strategies.
Labor empowerment of graduates and the development of soft and hard skills are relevant concerns for the future of work (Healy et al., 2022; Succi and Canovi, 2020), and GenAI has become a major driver of the future workforce (Firat, 2023). Companies and public agencies emphasize the need for a critical mass of professional talent to tackle the challenges of this century: sustainability, climate change, innovation, and economic productivity (Brundiers et al., 2021). The role of GenAI appears critical in addressing these challenges.
Despite growing interest in the use of GenAI in higher education, there remains a notable gap in students’ capabilities and preparedness to adopt these tools effectively. This underscores the importance of examining the factors that shape their engagement with GenAI. Accordingly, this study addresses the following research question: What distinctive student profiles emerge concerning the use, attitudes, and expectations toward Generative AI in the context of business education? Identifying such profiles is essential for designing differentiated educational interventions that maximize the pedagogical potential of GenAI in diverse learning environments.
Therefore, our study contributes a foundational typology of GenAI adoption patterns among MBA students, a necessary prerequisite for designing and testing targeted interventions. Our objective is to identify and characterize empirically validated student profiles, not to validate the effectiveness of specific instructional strategies. The proposed pedagogical framework emerging from these profiles represents theoretically-grounded hypotheses that require subsequent empirical validation through intervention research.
This study is explicitly exploratory in nature, aiming to characterize empirically grounded student profiles rather than to test confirmatory causal hypotheses. Accordingly, we do not formulate formal hypotheses with accept–reject logic. Instead, drawing on UTAUT, TPACK, and prior research on adult and professional learners, we articulate analytic expectations that guide the direction of inquiry and interpretation of results. Specifically, we expected that student profiles would exhibit systematic differences in perceived educational value of GenAI, pedagogical-technological readiness, and individual background characteristics such as age and professional experience. Additionally, given the voluntary and high-agency nature of MBA education, we anticipated that Social Influence and Facilitating Conditions would play a more limited role in shaping Behavioral Intention relative to performance- and effort-related beliefs. These expectations serve as sensitizing assumptions to structure the analysis, not as confirmatory hypotheses, and are evaluated descriptively through profile comparisons and convergent validity patterns rather than formal hypothesis testing.
While scholars have highlighted the need for institutional policies and ethical guidelines to ensure the responsible integration of GenAI in higher education (Michel-Villarreal et al., 2023), several pedagogical challenges remain. Among them, the customization—or even hyper-personalization—of learning pathways stands out as a central concern. In contemporary higher education, there is increasing emphasis on crafting learning journeys tailored to individual students’ needs. GenAI offers promising opportunities in this regard. However, effective personalization requires a nuanced understanding of learners’ expectations, attitudes, backgrounds, and prior experience with AI tools. A uniform, “one-size-fits-all” approach may be operationally efficient from an educational management perspective, but it risks overlooking students’ diverse needs and readiness levels.
To strike a balance between personalization and scalability, we employ a hybrid theoretical-empirical approach that combines data-driven cluster analysis with established educational frameworks (TPACK and UTAUT). This methodology produces profiles that are both empirically robust and theoretically interpretable, a critical requirement for translating research findings into actionable institutional strategies. Our study bridges a critical gap in the literature by providing the first validated typology of GenAI adoption patterns in business education, establishing the foundation for targeted interventions to be designed and tested.
The remainder of this paper is organized as follows. Section 2 presents the study's theoretical framework. The methodology is described in Section 3 and the results in Section 4. We discuss the results in Section 5 and provide our conclusions in Section 6, including implications, limitations, and future research agendas.
2 Theoretical framework
2.1 Technology adoption in education
Technology adoption in educational contexts presents unique challenges that distinguish it fundamentally from workplace and consumer environments. While traditional technology acceptance research has primarily focused on organizational settings where adoption decisions follow hierarchical mandates and clear productivity metrics (Venkatesh et al., 2003), educational environments operate under different dynamics characterized by individual agency, pedagogical considerations, and learning-centered objectives (Tondeur et al., 2017).
Educational contexts differ substantially from workplace environments in several critical dimensions. Unlike organizational technology adoption driven by efficiency and competitive advantage, educational technology integration must simultaneously address learning outcomes, pedagogical alignment, and student engagement (Prestridge, 2017). Academic institutions cannot simply mandate technology use; instead, they must create conditions where both educators and learners voluntarily embrace technologies that enhance learning experiences (Bahçivan et al., 2018). This fundamental difference necessitates technology acceptance models that account for education-specific factors such as pedagogical beliefs, learning objectives, and instructional contexts.
The complexity intensifies with the advent of GenAI technologies, which represent a paradigm shift from passive tools to active learning partners capable of content creation, personalized tutoring, and adaptive responses (Chan and Hu, 2023). Unlike previous educational technologies that primarily facilitated existing pedagogical practices, GenAI needs a fundamental reconsideration of teaching and learning processes (Michel-Villarreal et al., 2023). This transformative potential creates both unprecedented opportunities for personalized learning and significant challenges for institutional adoption strategies.
A critical gap emerges between technology acceptance and pedagogical integration. Traditional acceptance models predict whether individuals will use technology but fail to address how effectively they integrate it into educational practice (Cheng et al., 2022). Educational technology adoption requires dual-level success: initial acceptance decisions followed by meaningful pedagogical implementation. Research demonstrates that educators may accept technology yet struggle with effective classroom integration due to insufficient pedagogical knowledge or conflicting instructional beliefs (Arancibia-Herrera et al., 2024). This acceptance-integration gap suggests that educational technology adoption models must simultaneously address motivational factors influencing initial adoption and pedagogical factors determining implementation quality.
Student heterogeneity further complicates the adoption of educational technology. Unlike workplace environments with relatively homogeneous user populations sharing similar objectives and constraints, educational settings encompass diverse learners with varying technological backgrounds, learning preferences, and academic goals (Park and Lee, 2013). Research reveals significant individual differences in technology adoption patterns, influenced by factors such as prior experience, self-efficacy, generational characteristics, and disciplinary backgrounds (Li et al., 2019). This heterogeneity creates challenges for institutional technology implementation strategies that assume uniform user characteristics and adoption patterns.
Recent research highlights the inadequacy of one-size-fits-all approaches to educational technology adoption. Studies demonstrate that learner characteristics significantly moderate technology acceptance relationships, with factors such as technological anxiety, learning styles, and prior experience creating distinct adoption trajectories (Al-Adwan and Al-Debei, 2024). Furthermore, cultural and contextual factors introduce additional complexity, as technology adoption patterns vary across educational systems, disciplinary domains, and institutional contexts (Momenanzadeh et al., 2023).
The emergence of student typologies in educational technology research reflects growing recognition of adoption heterogeneity. Previous studies have identified distinct user profiles based on technology competence, adoption willingness, and integration success, ranging from enthusiastic early adopters to cautious skeptics requiring extensive support (Chan and Lee, 2023). However, these typologies often focus on single technologies or lack empirical validation across diverse educational contexts, limiting their practical applicability for institutional adoption strategies.
This complexity necessitates comprehensive theoretical frameworks that simultaneously address acceptance mechanisms and pedagogical integration processes while accounting for individual difference factors. Current research lacks validated models that integrate UTAUT technology acceptance theory with TPACK pedagogical frameworks and empirically-derived user typologies. Such integration is particularly critical for GenAI adoption, where the technology’s transformative potential requires both initial acceptance and sophisticated pedagogical implementation across diverse learner populations. Understanding these multifaceted adoption processes becomes essential for developing effective, differentiated implementation strategies that maximize GenAI’s educational potential while respecting learner diversity and institutional contexts.
2.2 UTAUT: understanding technology acceptance
The adoption of emerging technologies in higher education presents complex challenges that require robust theoretical frameworks to understand individual intentions and behaviors. The Unified Theory of Acceptance and Use of Technology (UTAUT), developed by Venkatesh et al. (2003), emerged as one of the most influential and widely validated frameworks for technology acceptance research. UTAUT synthesizes eight previous technology acceptance models, identifying four core constructs that predict usage intention and subsequent technology adoption behavior.
2.2.1 Performance expectancy (PE)
Represents the degree to which individuals believe that using a technology will enhance their performance in relevant activities. In educational contexts, this translates to perceptions that GenAI will improve learning outcomes, academic productivity, or assignment quality (Patterson et al., 2024; Sergeeva et al., 2025). For MBA students, Performance Expectancy encompasses beliefs about GenAI’s potential to enhance analytical capabilities, improve decision-making processes, and increase professional competence.
2.2.2 Effort expectancy (EE)
This variable measures the perceived ease of technology use, reflecting the cognitive and temporal resources required for effective adoption (Venkatesh et al., 2003). Educational research demonstrates that perceived complexity significantly influences student technology adoption decisions, particularly for sophisticated tools like GenAI that require learning new interaction paradigms (Al-Abdullatif, 2024; Tram, 2024). This construct proves especially relevant for diverse student populations with varying technological backgrounds.
2.2.3 Social influence (SI)
Captures individuals’ perceptions of social pressure from significant others to use technology. In educational settings, this includes influence from faculty, peers, and institutional authorities (Nikolic et al., 2024). Recent research reveals that social influence operates differently in educational versus workplace contexts, as academic environments emphasize individual agency and voluntary adoption rather than organizational mandates (Cabero-Almenara et al., 2024).
2.2.4 Facilitating conditions (FC)
Refer to the perceived availability of resources and support infrastructure necessary for technology use. This encompasses technical resources, institutional support, training availability, and organizational policies (Venkatesh et al., 2003). Educational research emphasizes that facilitating conditions significantly influence not only initial adoption but also sustained integration and effective usage patterns (Khlaif et al., 2024; Perez, 2024).
UTAUT’s application in educational GenAI research has demonstrated strong predictive validity across diverse contexts. Studies consistently show that Performance Expectancy and Effort Expectancy serve as primary predictors of GenAI adoption intentions among students and faculty (Patterson et al., 2024; Sergeeva et al., 2025). Furthermore, research reveals that pedagogical beliefs significantly moderate UTAUT relationships, with constructivist-oriented educators showing differential adoption patterns compared to transmissive-oriented instructors (Cabero-Almenara et al., 2024).
However, UTAUT's limitations in educational contexts become apparent when pedagogical integration complexity is considered, underscoring the need to complement UTAUT with frameworks that address this complexity, such as TPACK, in order to build a comprehensive understanding of both acceptance decisions and subsequent educational implementation quality.
2.3 TPACK: pedagogical technology integration
The Technological Pedagogical Content Knowledge (TPACK) framework (Mishra and Koehler, 2006) provides a comprehensive theoretical foundation for understanding effective technology integration in educational practice. TPACK conceptualizes three fundamental knowledge domains—Content Knowledge (disciplinary mastery), Pedagogical Knowledge (instructional strategies), and Technological Knowledge (digital tool proficiency)—whose intersections create four additional domains: Technological Content Knowledge (TCK), Technological Pedagogical Knowledge (TPK), Pedagogical Content Knowledge (PCK), and the comprehensive TPACK integration.
TPACK’s relevance to GenAI adoption extends beyond simple technology use to encompass pedagogical transformation. Recent research has demonstrated that integrating GenAI within TPACK frameworks significantly improves learning outcomes through personalized educational experiences (Bautista et al., 2024; Hava and Babayiğit, 2024; Tram, 2024). The framework’s dynamic nature allows adaptation to various educational contexts while emphasizing the critical role of pedagogical considerations in technology adoption decisions (Greene and Jones, 2020; Wing Chan and Wai Tang, 2024).
Empirical evidence reveals that TPACK proficiency influences both educator confidence and student engagement with GenAI technologies. Studies across multiple educational contexts confirm that teachers with stronger TPACK integration demonstrate enhanced preparedness for GenAI adoption and create more interactive learning environments (Ait Ali et al., 2023; Alzahrani and Alzahrani, 2025; Yang et al., 2025). Furthermore, Ning et al.’s (2024) AI-TPACK framework identified six knowledge components that predict successful AI integration, providing a comprehensive assessment tool for large-scale implementation.
However, TPACK implementation faces significant challenges that limit its predictive power for technology adoption. Research indicates that pedagogical-technological knowledge can paradoxically create resistance to GenAI adoption, as educators with stronger pedagogical expertise may perceive greater privacy concerns and compatibility issues with existing instructional practices (Alzahrani and Alzahrani, 2025; Karataş and Ataç, 2024). Additionally, factors such as limited digital infrastructure, insufficient training, and resistance to methodological change impede effective TPACK implementation (Jammeh et al., 2024).
Student profile differentiation emerges as a critical consideration within TPACK applications. Research has identified distinct technology user profiles based on TPACK competency levels, ranging from “Technological Pioneers” with high digital tool confidence to “Pedagogical Content Knowledge Specialists” with strong disciplinary expertise but limited technological proficiency (Cheng et al., 2024). Additional studies reveal profiles such as “Balanced Integrators” maintaining equilibrium across TPACK dimensions and “TPACK Laggers” showing consistently low competency levels (Li et al., 2024; Trevisan and De Rossi, 2023).
Individual difference factors significantly influence TPACK-based GenAI adoption patterns. Generational differences between “digital natives” and “digital immigrants” create distinct technology interaction patterns (Hava and Babayiğit, 2024), while AI literacy levels determine users’ ability to effectively evaluate and implement GenAI tools (Al-Abdullatif, 2024). Furthermore, GenAI-related anxiety and technological self-efficacy serve as critical psychological factors influencing adoption willingness (Wang et al., 2024).
2.4 Integrating TPACK with UTAUT: a comprehensive framework for GenAI adoption
In educational settings, adoption involves both the motivational and contextual drivers that shape willingness to use a tool (captured by UTAUT) and the pedagogical capability to integrate the tool meaningfully into learning tasks (captured by TPACK). GenAI intensifies this dual requirement because it is not merely a delivery technology but a content-generating partner that can reshape learning activities. Therefore, an integrated TPACK-UTAUT lens is appropriate for identifying heterogeneous learner profiles that differ simultaneously in adoption drivers and pedagogical readiness, enabling segmentation that is interpretable for instructional design and institutional planning.
The limitations identified in both the TPACK and UTAUT frameworks underscore the need for theoretical integration to understand GenAI adoption in educational contexts comprehensively. While UTAUT effectively predicts technology acceptance intentions through motivational and contextual factors, it fails to capture the pedagogical complexity essential for meaningful educational technology integration (Al-Abdullatif, 2024; Tram, 2024). Conversely, TPACK provides a sophisticated understanding of pedagogical technology integration but lacks predictive power regarding initial adoption decisions and behavioral outcomes (Cheng et al., 2022; Wang et al., 2024). This complementary relationship suggests that integrated frameworks can address gaps inherent in single-theory approaches while providing a comprehensive understanding of educational technology adoption processes.
Theoretical integration occurs at multiple conceptual levels. UTAUT’s Performance Expectancy construct directly maps onto TPACK’s Technological Knowledge domain, as both address users’ confidence in technology’s capability to enhance educational outcomes (Ning et al., 2024). Similarly, Effort Expectancy aligns with the intersection of TPK, where ease of use influences pedagogical implementation decisions (Hava and Babayiğit, 2024; Yang et al., 2025). Facilitating Conditions in UTAUT corresponds to the organizational support necessary for effective TPACK development, particularly the institutional infrastructure required for comprehensive technology-pedagogy-content integration (Malusay et al., 2025; Shin et al., 2024).
The integration addresses critical theoretical gaps by creating a sequential adoption model where UTAUT constructs predict initial GenAI acceptance decisions, while TPACK dimensions explain subsequent pedagogical integration quality (Alzahrani and Alzahrani, 2025). This dual-phase approach recognizes that educational technology adoption involves both willingness to engage with technology (UTAUT domain) and capability to integrate it meaningfully into pedagogical practice (TPACK domain). Recent empirical evidence supports this integration, with studies demonstrating that pedagogical beliefs significantly moderate the relationships between UTAUT and TPACK development trajectories, while technology acceptance factors influence the latter (Cabero-Almenara et al., 2024; Karataş and Ataç, 2024).
Individual difference factors emerge as critical moderators in the integrated framework. Student characteristics such as age, professional experience, prior training, and pedagogical orientations influence both UTAUT acceptance processes and TPACK integration patterns (Li et al., 2024; Trevisan and De Rossi, 2023). These individual differences create heterogeneous adoption trajectories that necessitate differentiated implementation strategies (Cheng et al., 2024). The integrated framework suggests that successful GenAI adoption requires alignment between individual acceptance factors (UTAUT) and pedagogical competency development (TPACK), with misalignment creating barriers to effective implementation (Bautista et al., 2024; Oved and Alt, 2025).
The practical implications of integration extend beyond theoretical advancements to inform institutional adoption strategies. Rather than treating acceptance and integration as separate processes, the integrated framework suggests coordinated interventions that simultaneously address motivational factors influencing adoption decisions and pedagogical factors determining implementation quality (Greene and Jones, 2020; Jammeh et al., 2024). This approach enables the prediction not only of who will adopt GenAI technologies but also of how effectively they will integrate them into educational practice, providing crucial guidance for resource allocation and support system design.
The TPACK-UTAUT integration thus provides a comprehensive theoretical foundation for understanding the complexity of GenAI adoption while addressing the heterogeneous needs of diverse student populations (Park and Lee, 2013). This integrated approach forms the basis for empirically identifying distinct student profiles that exhibit different patterns of technology acceptance and pedagogical integration, enabling the development of differentiated educational interventions that maximize GenAI’s transformative potential in business education contexts.
2.5 Student profiles and differentiated adoption patterns
The adoption of GenAI in education is inherently heterogeneous, shaped by diverse student backgrounds, technological proficiencies, and pedagogical needs (Chan and Hu, 2023; Michel-Villarreal et al., 2023). Unlike homogeneous workplace settings, educational environments encompass learners with varying levels of digital literacy, disciplinary expertise, and attitudes toward innovation (Li et al., 2024; Park and Lee, 2013). For instance, younger “digital natives” may exhibit higher exploratory behaviors with GenAI, while experienced professionals often prioritize evidence-based utility (Al-Adwan and Al-Debei, 2024; Yang et al., 2025). This heterogeneity challenges institutional strategies that assume uniform adoption pathways, necessitating frameworks that account for multidimensional differences in technology acceptance and pedagogical integration (Celik, 2023; Cheng et al., 2024).
Prior research has proposed student typologies based on technology adoption (e.g., “innovators” vs. “laggards”; Rogers, 2003) or TPACK competency, like “Technological Pioneers” vs. “Pedagogical Specialists” (Trevisan and De Rossi, 2023). However, these classifications often suffer from two key limitations: (a) they focus on either acceptance (UTAUT) or integration (TPACK), failing to bridge the gap between intent and practice (Cheng et al., 2022; Karataş and Ataç, 2024); and (b) they lack empirical validation in GenAI contexts, relying instead on theoretical proxies (Ning et al., 2024). For example, while Cabero-Almenara et al. (2024) identified educator profiles using UTAUT, their model omitted TPACK’s pedagogical dimensions, limiting actionable insights for classroom implementation. Similarly, TPACK-based typologies (e.g., Hava and Babayiğit, 2024) rarely incorporate behavioral predictors like performance expectancy or social influence, despite their proven role in adoption decisions (Venkatesh et al., 2012).
Therefore, our study integrates UTAUT and TPACK to derive empirically grounded student profiles that reflect both adoption drivers and pedagogical readiness. This approach aligns with recent calls for hybrid frameworks in educational technology research (Murphy et al., 2024; Wang et al., 2024). By anchoring profiles in both UTAUT (e.g., behavioral intention) and TPACK (e.g., technological pedagogical knowledge), we offer a scalable model for personalized interventions, advancing beyond the “one-size-fits-all” paradigm (Tondeur et al., 2017; Tram, 2024).
3 Methodology
3.1 Sample, variables, and calibration
The sample consisted of 252 valid observations collected from MBA students enrolled in a Peruvian business school. After implementing systematic data quality controls and removing incomplete responses, all cases retained complete information across critical variables, ensuring robust analytical foundations for cluster validation. Demographic characteristics revealed a gender distribution of 69.3% men and 30.7% women. Regarding employment sectors, 90.8% of respondents worked in the private sector, 6.4% in the public sector, and 2.8% in non-profit organizations. In terms of professional experience, 59.0% had more than 10 years of work experience, 21.9% between 6 and 10 years, and 19.1% between 3 and 5 years. Participants came from diverse industries, with the most represented being mining (15.5%), education (11.6%), and banking and financial services (10.8%). The “Other” category, which includes sectors such as consulting, healthcare, and manufacturing, accounted for 35.5% of the sample.
Age distribution ranged from 25 to 55 years (M = 36.9, SD = 7.8), providing adequate representation across different generational cohorts relevant to technology adoption research. This demographic diversity, particularly the variation in age and professional experience, provides a robust foundation for identifying differentiated student profiles in the context of GenAI adoption, as these variables are theoretically central to TPACK framework applications (Cheng et al., 2024).
Sampling adequacy for cluster analysis was confirmed against multiple criteria: (1) a sample size exceeding the recommended minimum of 2k observations per expected cluster (Hair et al., 2019), (2) complete data across all critical UTAUT constructs, and (3) sufficient variance in key theoretical variables to enable meaningful profile differentiation. The final sample of 252 participants provides adequate statistical power for robust cluster validation and cross-validation analyses. Instrument calibration was conducted through a comprehensive validation process. All UTAUT constructs demonstrated acceptable to excellent internal consistency (Cronbach's α ranging from 0.731 to 0.946), with Performance Expectancy (α = 0.901), Facilitating Conditions (α = 0.894), and Motivation (α = 0.946) showing robust reliability. Social Influence reliability improved from 0.685 to 0.731 after systematic item analysis and removal of one poorly performing item, consistent with psychometric best practices (Nunnally and Bernstein, 1994).
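For transparency, the reliability computations can be reproduced with a short script. The sketch below assumes item responses are held in a pandas DataFrame `df` with hypothetical column names (e.g., SI1–SI4 for the Social Influence items) and shows how Cronbach's α and the α-if-item-deleted diagnostic used to flag a poorly performing item could be computed; it is illustrative rather than our exact analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha recomputed after dropping each item, used to flag poorly performing items."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col)) for col in items.columns})

# Hypothetical Social Influence items; `df` is the assumed respondent-level DataFrame
si_items = df[["SI1", "SI2", "SI3", "SI4"]]
print(cronbach_alpha(si_items))    # reliability before item removal
print(alpha_if_deleted(si_items))  # drop the item whose removal raises alpha the most
```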
3.2 Data processing
Data preparation followed a systematic process to ensure analytical rigor and theoretical consistency. After comprehensive data cleaning and outlier detection, we implemented variable recoding to align the scales, ensuring that higher values consistently indicated more positive attitudes across all constructs.
We employed a hybrid approach that combines empirical clustering with theoretical validation, addressing recent criticisms regarding the interpretability of educational technology clustering research (Park and Lee, 2013; Cheng et al., 2024). Rather than relying solely on data-driven methods, we implemented theoretically informed classification based on established TPACK and UTAUT literature. Composite variables for all UTAUT constructs were constructed using validated psychometric procedures, with items aggregated using mean scores after confirming acceptable internal consistency.
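As an illustration of this step, the following sketch shows how negatively worded items could be reverse-coded and composite UTAUT scores formed as item means; the item names, the grouping of items into constructs, and the assumed 5-point response scale are hypothetical.

```python
import pandas as pd

SCALE_MAX = 5  # assumed response-scale maximum

def reverse_code(series: pd.Series, scale_max: int = SCALE_MAX) -> pd.Series:
    """Recode an item so that higher values indicate more positive attitudes."""
    return scale_max + 1 - series

# Hypothetical negatively worded item
df["RISK1_r"] = reverse_code(df["RISK1"])

# Hypothetical item-to-construct mapping (one Social Influence item removed after item analysis)
CONSTRUCTS = {
    "PE": ["PE1", "PE2", "PE3", "PE4"],
    "EE": ["EE1", "EE2", "EE3"],
    "SI": ["SI1", "SI2", "SI4"],
    "FC": ["FC1", "FC2", "FC3"],
}

# Composite scores as item means, computed only after reliability is deemed acceptable
for construct, items in CONSTRUCTS.items():
    df[construct] = df[items].mean(axis=1)
```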
Profile classification combined quantile-based cutoff points with theoretical criteria. Skeptics were identified as individuals with low Performance Expectancy (≤33rd percentile) and higher age/professional experience (≥50th percentile), reflecting experienced professionals requiring evidence-based persuasion. Explorers exhibited high Performance Expectancy (≥66th percentile), absence of formal AI training, and younger age (≤50th percentile), consistent with digital native adoption patterns. Moderates comprised the remaining participants, representing the theoretically expected intermediate profile.
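To make the classification rule concrete, a minimal sketch of the quantile-based assignment is given below. The column names (`PE`, `age`, `experience_years`, `ai_training`) and the exact handling of the age/experience criterion are assumptions for illustration, not a reproduction of our analysis scripts.

```python
import pandas as pd

def classify_profiles(df: pd.DataFrame) -> pd.Series:
    """Theory-informed profile assignment using quantile cutoffs (illustrative)."""
    pe_33, pe_66 = df["PE"].quantile([0.33, 0.66])
    age_50 = df["age"].quantile(0.50)
    exp_50 = df["experience_years"].quantile(0.50)

    # Skeptics: low Performance Expectancy plus higher age and/or professional experience
    skeptic = (df["PE"] <= pe_33) & ((df["age"] >= age_50) | (df["experience_years"] >= exp_50))
    # Explorers: high Performance Expectancy, no formal AI training, younger age
    explorer = (df["PE"] >= pe_66) & (df["ai_training"] == 0) & (df["age"] <= age_50)

    profile = pd.Series("Moderate", index=df.index)
    profile[skeptic] = "Skeptic"
    profile[explorer & ~skeptic] = "Explorer"
    return profile

df["profile"] = classify_profiles(df)
```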
To balance interpretability and empirical grounding, we employed a two-stage hybrid clustering approach. First, theory-informed anchors were defined using quantile-based thresholds on key variables (Performance Expectancy, age, professional experience, and prior AI training) to establish interpretable archetypes grounded in UTAUT, TPACK, and adult-learning theory. Second, k-medoids clustering was applied to standardized adoption-related variables to examine whether the empirical data structure converged with these theory-anchored profiles. K-medoids was selected for its robustness to outliers and suitability for Likert-type measures. Cluster results were not treated as “true” categories; instead, convergence between theoretical and empirical classifications was assessed using χ2 tests and Cramer’s V. Under this design logic, the low average silhouette score reflects overlapping adoption tendencies rather than sharply bounded learner types.
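A compact sketch of this second stage is shown below, assuming the adoption-related variables are stored in `df`, that the feature list is illustrative, and that the k-medoids implementation from the scikit-learn-extra package is used.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.preprocessing import StandardScaler
from sklearn_extra.cluster import KMedoids  # PAM-style clustering, robust to outliers

FEATURES = ["PE", "EE", "SI", "FC", "tpack_index", "behavioral_intention"]  # assumed variable names

X = StandardScaler().fit_transform(df[FEATURES])
km = KMedoids(n_clusters=3, metric="euclidean", random_state=42).fit(X)
df["empirical_cluster"] = km.labels_

# Convergence between theory-anchored profiles and empirical clusters
table = pd.crosstab(df["profile"], df["empirical_cluster"])
chi2, p, dof, expected = chi2_contingency(table)
n = table.values.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi-square p = {p:.4f}, Cramer's V = {cramers_v:.3f}")
```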
3.2.1 Cluster validation
Our validation strategy addressed methodological rigor through four complementary dimensions: theoretical-empirical concordance, statistical significance testing, internal cluster quality metrics, and stability analysis (Murphy et al., 2024). K-medoids clustering was selected over k-means due to robustness to outliers and interpretable cluster representatives, valuable in educational research contexts.
Theoretical-empirical validation examined concordance between theoretical profiles and empirical clusters using contingency table analysis. Cramer’s V measured association strength, while Chi-square tests provided significance testing. Internal validation included silhouette analysis and bootstrap stability analysis using 100 resampling iterations. We employed Euclidean distance on standardized variables and multiple random initializations to ensure robustness. This comprehensive approach ensures both statistical soundness and theoretical interpretability for educational intervention design.
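One way to implement the bootstrap stability check, under the assumption that the preferred number of clusters is selected by silhouette on each resample, is sketched below; `X` is the standardized feature matrix from the previous step.

```python
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids

def bootstrap_k_stability(X: np.ndarray, k_range=range(2, 7), n_boot=100, seed=42) -> dict:
    """On each bootstrap resample, select k by silhouette and record how often each k wins."""
    rng = np.random.default_rng(seed)
    chosen = []
    for b in range(n_boot):
        idx = rng.choice(len(X), size=len(X), replace=True)
        Xb = X[idx]
        sil = {}
        for k in k_range:
            labels = KMedoids(n_clusters=k, metric="euclidean", random_state=b).fit_predict(Xb)
            sil[k] = silhouette_score(Xb, labels, metric="euclidean")
        chosen.append(max(sil, key=sil.get))
    return {k: chosen.count(k) / n_boot for k in k_range}

# Proportion of resamples in which each candidate k is preferred
print(bootstrap_k_stability(X))
```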
3.3 Study variables
Our variable selection and operationalization were grounded in the TPACK model (Mishra and Koehler, 2006) and the UTAUT framework (Venkatesh et al., 2003), both adapted to GenAI adoption in higher education. Table 1 presents the comprehensive variable structure organized by theoretical construct.
The UTAUT constructs directly address technology acceptance mechanisms, with Performance Expectancy mapping to TPACK’s Technological Knowledge and Facilitating Conditions representing organizational support for technology-pedagogy integration. The TPACK Index provides a composite measure of integrated technological, pedagogical, and content knowledge readiness, while control variables enable the examination of individual difference factors influencing adoption patterns. This variable structure allows the robust testing of theoretical relationships while providing practical insights for designing educational interventions, thereby balancing theoretical rigor with practical applicability in business education contexts.
4 Results
4.1 Sample characteristics and descriptive statistics
Our sample consisted of 252 MBA students, all of whom provided complete responses across all variables. Table 2 presents descriptive statistics demonstrating adequate variance for cluster analysis.
The sample exhibited substantial heterogeneity across theoretical dimensions. Age ranged from 25 to 55 years, with 59.0% reporting more than 10 years of professional experience. Regarding GenAI exposure, 44.8% had received AI training, while usage frequency showed considerable variation. UTAUT constructs demonstrated adequate variance, with Performance Expectancy showing moderate levels and the TPACK index revealing substantial individual differences, supporting heterogeneous technological-pedagogical readiness.
4.2 Psychometric validation of UTAUT constructs
Table 3 presents the internal consistency coefficients and item statistics for each theoretical construct.
All constructs achieved acceptable to excellent reliability (α = 0.731–0.946). Correlation analysis confirmed discriminant validity with no correlations exceeding 0.70, indicating distinct theoretical dimensions rather than redundant measurements.
4.3 Theoretical profile validation and empirical clustering
Following established practices in educational technology research, we employed a hybrid validation approach combining theoretical profile classification with empirical cluster analysis. This methodology addresses recent criticisms regarding the interpretability of purely data-driven clustering in educational contexts while maintaining empirical rigor. Theoretical profile classification based on TPACK-UTAUT criteria identified three distinct groups: Skeptics (21.0%, n = 53), Moderates (67.9%, n = 171), and Explorers (11.1%, n = 28). Table 4 presents the cross-tabulation between theoretical profiles and empirical clusters derived from k-medoids analysis.
The theoretical-empirical concordance analysis revealed moderate but statistically significant alignment (Cramer's V = 0.276, χ2 p < 0.001). This level of concordance is consistent with educational technology adoption research, which typically finds that student profiles exhibit continuous rather than discrete characteristics. Notably, Explorers showed strong clustering (82.1% in Empirical Cluster 1), while Skeptics were concentrated in Empirical Cluster 3 (52.8%), suggesting meaningful empirical validation of the theoretical typology.
4.4 Profile characterization and differential analysis
The three validated profiles exhibited distinct patterns across demographic, technological, and pedagogical dimensions. Table 5 presents comprehensive descriptive statistics for each profile, and Table 6 reports the results of statistical significance tests for key differentiating variables.
Explorers (n = 28, 11.1%) emerged as the youngest group (M = 29.8 years, SD = 3.4) with the highest Performance Expectancy scores (M = 2.74, SD = 0.59). Despite having no formal AI training (0%), they demonstrated strong technological confidence and the highest TPACK integration scores (M = 3.12, SD = 0.31). This profile aligns with theoretical expectations for digital natives who exhibit high intrinsic motivation for technology exploration despite limited formal preparation.
Moderates (n = 171, 67.9%) represented the largest and most balanced group, with intermediate age (M = 36.8 years, SD = 8.2) and moderate Performance Expectancy (M = 2.10, SD = 0.58). Notably, 45% had received AI training, suggesting a more structured approach to technology adoption. Their TPACK scores (M = 2.90, SD = 0.35) reflected systematic but cautious integration patterns consistent with pragmatic adopter characteristics.
Skeptics (n = 53, 21.0%) comprised the oldest participants (M = 42.2 years, SD = 6.1) with the lowest Performance Expectancy (M = 1.50, SD = 0.50). Paradoxically, this group had the highest rate of formal AI training (67.9%), suggesting that increased exposure to AI training enhanced critical evaluation rather than uncritical acceptance. Their TPACK scores (M = 2.74, SD = 0.35) indicated cautious but informed technological integration.
Statistical significance testing revealed substantial differences across profiles on key theoretical dimensions (Table 6). Performance Expectancy, Effort Expectancy, Age, and the TPACK Index all showed highly significant differences (p < 0.001) with large effect sizes (η2 > 0.14), confirming meaningful profile differentiation. Notably, Social Influence and Facilitating Conditions showed no significant differences, suggesting these factors operate similarly across profiles while the other dimensions drive differentiation.
4.5 TPACK integration patterns and pedagogical implications
The differential analysis revealed distinct patterns of TPACK integration across student profiles, with significant implications for instructional design and institutional support strategies. Table 7 presents the framework for differentiated pedagogical interventions based on empirically validated profile characteristics, whereas Table 8 details the specific attributes characterizing each student profile.
TPACK integration analysis revealed fundamentally different approaches to technology-pedagogy relationships across profiles. Explorers demonstrated the strongest Technology Knowledge (TK) orientation, with TPACK-Behavioral Intention correlations of r = 0.68, suggesting that technological competence directly drives implementation intentions. This profile benefits from advanced project-based learning where technological exploration enhances both pedagogical understanding and content mastery. These trajectories represent theoretical expectations based on the integrated TPACK-UTAUT framework and require empirical validation through intervention studies.
Moderates exhibited balanced TPACK integration with moderate correlations across all domains (TPACK-BI r = 0.45, TPACK-Motivation r = 0.52). This suggests a systematic rather than intuitive approach to technology integration, supporting structured and progressive training methods. Their pedagogical framework emphasizes scaffolded learning experiences that gradually build competence across technological, pedagogical, and content knowledge dimensions simultaneously.
Skeptics demonstrated the strongest PCK orientation, with TPACK-Usage Frequency correlations of only r = 0.28, suggesting that technological proficiency does not automatically translate into usage intentions. This profile requires evidence-based persuasion strategies that demonstrate clear connections between GenAI tools and existing pedagogical expertise, emphasizing educational value over technological sophistication.
Institutional resource allocation implications are substantial. Explorers require minimal support but maximum freedom, suggesting an investment in cutting-edge tools and innovation spaces. Moderates benefit from structured support systems, including communities of practice and systematic training programs. Skeptics require intensive, personalized support that focuses on pedagogical relevance and demonstrates tangible educational outcomes.
We report an adoption propensity pattern derived from self-reported Behavioral Intention, Motivation, and Risk Perception. This summary is intended to inform implementation planning and execution but does not represent observed learning gains, performance outcomes, or causal effects of any instructional strategy. Outcome validation requires longitudinal and/or experimental designs using direct measures.
Cross-profile learning opportunities emerge from these differential patterns. Explorers can serve as peer mentors for technological exploration, Moderates can facilitate systematic implementation communities, and Skeptics can provide critical evaluation and quality assurance perspectives. This suggests mixed-profile learning environments may optimize overall institutional adoption outcomes.
The empirically validated framework provides evidence-informed guidance for business schools implementing GenAI technologies, enabling targeted interventions that respect student diversity while maximizing pedagogical effectiveness across different adoption profiles.
4.6 Model validation and robustness checks
To ensure the reliability and generalizability of our findings, we conducted comprehensive validation analyses examining both the stability of the clustering solution and the robustness of the theoretical framework. As shown in Table 9, bootstrap validation across 50 resampled datasets confirmed exceptional clustering stability, consistently identifying exactly 3 clusters in 100% of samples. Moreover, cross-validation using 70–30 train-test split demonstrated adequate generalizability with no statistically significant distribution differences (χ2 p = 0.249).
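The train-test check can be approximated as follows, assuming the theory-anchored `profile` labels from Section 3 are available in `df`; the comparison tests whether the profile distribution differs between the two partitions.

```python
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.model_selection import train_test_split

# 70-30 split of the respondent-level data
train, test = train_test_split(df, test_size=0.30, random_state=42, shuffle=True)

# Compare profile distributions between partitions
counts = pd.DataFrame({
    "train": train["profile"].value_counts(),
    "test": test["profile"].value_counts(),
}).fillna(0)
chi2, p, dof, expected = chi2_contingency(counts.T)
print(f"profile distribution difference across splits: chi-square p = {p:.3f}")
```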
Construct validity assessment revealed differential patterns across UTAUT constructs. Performance Expectancy (r = 0.696) and Effort Expectancy (r = 0.676) demonstrated strong convergent validity with Behavioral Intention. In contrast, Social Influence (r = 0.069) and Facilitating Conditions (r = 0.110) exhibited unexpectedly low convergent validity, likely reflecting the unique characteristics of the educational context, where individual agency predominates. The average silhouette score (0.093) indicates weak separation among clusters and some overlap between profiles. Accordingly, the profiles reflect the continuous rather than discrete nature of student adoption patterns in educational contexts (Murphy et al., 2024). Unlike biological taxonomies or consumer market segments, learner profiles exist on multidimensional continua with substantial overlap, a finding that itself has theoretical significance for personalized education strategies. Such overlap does not negate the practical value of segmentation for targeted support; however, it does constrain classification precision and motivates a conservative interpretation of between-profile differences.
Sensitivity analysis demonstrated robustness to methodological decisions. Adjusting quantile thresholds by ±5% resulted in minimal changes to classification (with a maximum variation of 3.9%). Theoretical validation confirmed convergent validity with established TPACK literature, with profiles aligning closely to Cheng et al.’s (2024) typologies and Trevisan and De Rossi’s (2023) categories.
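The threshold sensitivity check can be sketched as follows; this simplified variant of the classification rule from Section 3.2 omits the professional-experience criterion for brevity and uses the same hypothetical column names.

```python
import pandas as pd

def classify_with_thresholds(df: pd.DataFrame, pe_lo: float, pe_hi: float, age_mid: float) -> pd.Series:
    """Simplified quantile-based classification with adjustable thresholds (illustrative)."""
    pe_l, pe_h = df["PE"].quantile([pe_lo, pe_hi])
    age_m = df["age"].quantile(age_mid)
    skeptic = (df["PE"] <= pe_l) & (df["age"] >= age_m)
    explorer = (df["PE"] >= pe_h) & (df["ai_training"] == 0) & (df["age"] <= age_m)
    profile = pd.Series("Moderate", index=df.index)
    profile[skeptic] = "Skeptic"
    profile[explorer & ~skeptic] = "Explorer"
    return profile

baseline = classify_with_thresholds(df, 0.33, 0.66, 0.50)
for shift in (-0.05, 0.05):
    alt = classify_with_thresholds(df, 0.33 + shift, 0.66 + shift, 0.50 + shift)
    changed = (alt != baseline).mean()
    print(f"threshold shift {shift:+.2f}: {changed:.1%} of cases reclassified")
```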
This comprehensive validation approach addresses methodological criticisms while maintaining practical utility, supporting both statistical robustness and theoretical meaningfulness for replication across business education contexts.
5 Discussion
5.1 Theoretical contributions
This study makes three fundamental theoretical contributions to the field of educational technology. First, we tested the full set of core UTAUT constructs in a GenAI-in-MBA context using psychometrically reliable measures. Results show that PE and EE align strongly with Behavioral Intention, whereas SI and FC show limited explanatory power in this high-agency setting, suggesting that UTAUT’s relative construct weights may be context-dependent. While previous research has applied partial UTAUT models to educational technologies (Al-Abdullatif, 2024; Wang et al., 2024), our study uniquely validates all core constructs—including the previously underexplored Social Influence and Facilitating Conditions—with psychometrically robust measures (α ranging from 0.731 to 0.946). This contribution addresses a critical gap identified by Celik (2023) regarding the need for validated theoretical frameworks in AI-TPACK research.
Second, we advance TPACK-UTAUT theoretical integration by demonstrating how technology acceptance constructs map onto pedagogical knowledge domains in GenAI contexts. Our empirical findings reveal that Performance Expectancy strongly correlates with Technological Knowledge development (r = 0.696), while Facilitating Conditions influence the TPK intersection. This integration extends Ning et al.’s (2024) AI-TPACK framework by providing quantitative validation of theoretical relationships previously supported only through qualitative evidence.
However, an essential methodological caveat must be acknowledged: the cross-sectional nature of our design precludes causal inference regarding the relationships between profiles and their characteristic attitudes. We cannot definitively determine whether low Performance Expectancy precedes and shapes skeptical dispositions, or whether skeptical orientations lead individuals to report lower Performance Expectancy. This bidirectionality is particularly evident in the paradoxical finding that Skeptics exhibit the highest rate of formal AI training (67.9%). This pattern could reflect either: (a) that formal training enhances critical evaluation capabilities, leading trained individuals to adopt more skeptical positions; or (b) that individuals with inherently skeptical dispositions actively seek formal preparation to validate their concerns. Similarly, the relationship between age, professional experience, and profile membership may be reciprocal rather than unidirectional. While our theoretical framework positions these as antecedents, longitudinal research is necessary to establish temporal precedence and directional effects. This limitation does not invalidate our typology (validated profiles remain actionable for institutional planning), but it does constrain our ability to make prescriptive claims about the mechanisms underlying profile formation.
The third contribution relies on our hybrid validation approach, which addresses recent methodological criticisms in educational technology clustering research (Murphy et al., 2024). By combining theoretical profile classification with empirical clustering validation (Cramer’s V = 0.276, p < 0.001), we demonstrate that student typologies can be both theoretically grounded and empirically robust. This methodology offers a replicable framework for future research examining technology adoption heterogeneity in educational contexts, moving beyond purely data-driven approaches that often lack interpretability.
The emergence of three distinct profiles (Skeptics, Moderates, and Explorers) extends established technology adoption typologies (Cheng et al., 2024; Trevisan and De Rossi, 2023) specifically to GenAI contexts while maintaining theoretical coherence with broader TPACK literature. Importantly, our findings demonstrate that GenAI adoption follows similar patterns to previous educational technology integrations, but with unique characteristics that reflect the transformative nature of Generative AI capabilities.
Our empirical analysis provides robust evidence for theoretical integration of TPACK and UTAUT frameworks in GenAI adoption contexts. Validation across construct validity, cross-validation, and bootstrap stability establishes methodological standards for educational technology research.
Construct validation revealed theoretically meaningful patterns illuminating technology adoption complexity. Strong convergent validity between Performance Expectancy and Behavioral Intention (r = 0.696) confirms core UTAUT predictions while extending them to GenAI contexts. However, unexpectedly low convergent validity for Social Influence (r = 0.069) and Facilitating Conditions (r = 0.110) suggests necessary contextual adaptations. In graduate business education, where students exhibit high individual agency, institutional pressures operate differently than in the workplace settings where UTAUT was originally validated (Alzahrani and Alzahrani, 2025).
TPACK integration patterns reveal differential adoption pathways. Explorers demonstrate Technology Knowledge dominance with strong TPACK-Behavioral Intention correlations (r = 0.68), suggesting technological competence drives implementation. Moderates exhibit balanced integration, while Skeptics show PCK orientation requiring evidence-based validation. These patterns confirm that effective GenAI integration requires differentiated approaches respecting diverse TPACK competency profiles.
The limited roles of Social Influence (r = 0.069) and Facilitating Conditions (r = 0.110) in predicting Behavioral Intention, and their failure to differentiate significantly across profiles, represent both a finding and a limitation that warrant deeper theoretical consideration. This pattern suggests that UTAUT constructs operate differently in voluntary educational contexts characterized by high individual agency than in workplace settings where the model was initially validated (Venkatesh et al., 2003). In MBA education, where students are self-directed professionals pursuing voluntary learning, organizational pressures (SI) and institutional infrastructure (FC) appear less determinative of adoption decisions than individual performance beliefs (PE) and ease-of-use perceptions (EE).
This context-specific adaptation has relevant implications for the generalizability of our integrated TPACK-UTAUT framework. While the framework successfully identifies distinct profiles based on PE, EE, age, and TPACK integration, the muted role of SI and FC suggests that in contexts with stronger institutional mandates (e.g., K-12 settings with prescribed technology use) or collectivist cultural orientations (where social influence carries greater weight), the relative importance of UTAUT constructs may shift. Future research should test whether our framework maintains structural validity across contexts in which institutional pressure and social norms play a more determinative role. This does not invalidate our findings for the MBA context—indeed, understanding which constructs drive adoption in high-agency environments is itself a valuable contribution—but it does suggest the need for context-sensitive calibration when applying the framework to other educational settings.
5.2 Implications for instructional design
Our empirically validated profiles provide actionable guidance for instructional designers and educational technologists implementing GenAI technologies in business education contexts. The differentiated framework moves beyond generic “best practices” to offer evidence-based strategies tailored to specific student characteristics and needs.
For Explorers (11.1%), the optimal instructional strategy emphasizes project-based learning with advanced GenAI applications. These students, characterized by high Performance Expectancy (M = 2.74) and technological confidence, despite lacking formal training, benefit from autonomy and access to cutting-edge tools. Practical implementations include beta-testing new GenAI platforms, leading peer workshops, and developing innovative applications to solve business problems. Educational institutions should position Explorers as “innovation champions” who can bridge the gap between technological possibilities and practical applications, consistent with Rogers’ (2003) diffusion of innovations theory regarding early adopters.
For Moderates (67.9%), representing the largest group, structured progressive training emerges as the optimal approach. Their balanced TPACK scores (M = 2.90) and moderate Performance Expectancy (M = 2.10) suggest systematic learning preferences. Effective strategies include scaffolded GenAI workshops, collaborative learning communities, and clear implementation frameworks. Given that 45% have received AI training, building upon existing knowledge through structured advancement pathways optimizes learning outcomes. This approach aligns with the findings of Malusay et al. (2025) regarding the effectiveness of systematic professional development in technology integration.
Among Skeptics (21.0%), evidence-based persuasion strategies are most effective. Their high formal training rates (67.9%) combined with low Performance Expectancy (M = 1.50) indicate that exposure alone is insufficient; they require demonstrated pedagogical value. Successful interventions include case-based learning showcasing measurable educational outcomes, ROI demonstrations, and gradual proof-of-concept implementations. This group's seniority (M = 42.2 years of age) and extensive professional experience represent valuable institutional knowledge that, when properly engaged, can enhance overall GenAI implementation quality.
Cross-profile learning opportunities emerge as a significant institutional strategy. Mixed-profile learning environments enable Explorers to provide technological inspiration, Moderates to facilitate systematic implementation, and Skeptics to contribute critical evaluation. This approach maximizes institutional learning while respecting individual adoption preferences, consistent with social learning theory (Bandura, 1977) and recent findings on peer influence in educational technology adoption (Yang et al., 2025).
The adaptive instructional framework also addresses temporal considerations. As GenAI technologies evolve rapidly, the framework provides flexibility for profile-specific adaptation strategies. Explorers can pilot emerging tools, Moderates can systematize successful innovations, and Skeptics can validate educational effectiveness—creating a sustainable innovation ecosystem within academic institutions.
5.3 Institutional policy implications
Our findings have substantial implications for institutional leaders responsible for GenAI implementation in business education contexts. The empirically validated profiles provide a foundation for evidence-based policy development and resource allocation strategies that move beyond one-size-fits-all approaches.
Resource allocation strategies should reflect profile-specific needs and potential returns on investment. Our predictive modeling suggests differential expected adoption trajectories: Explorers (48.8%), Moderates (30.6%), and Skeptics (10.7%). However, institutional value extends beyond adoption rates. Skeptics, despite lower adoption likelihood, provide crucial quality assurance and pedagogical validation that enhances overall implementation effectiveness. We recommend a 60-25-15 resource distribution favoring Moderates (the largest group, with steady ROI), followed by intensive support for Skeptics, and minimal but high-quality resources for Explorers.
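To make the arithmetic of this heuristic concrete, the minimal Python sketch below translates the proposed 60-25-15 split into per-student investment levels using the profile shares reported earlier; the total budget and cohort size are hypothetical placeholders rather than figures from our study.

```python
# Minimal sketch (hypothetical figures): converting the 60-25-15 resource
# split into per-student investment levels for each profile.

PROFILE_SHARE = {"Explorers": 0.111, "Moderates": 0.679, "Skeptics": 0.210}
RESOURCE_SPLIT = {"Moderates": 0.60, "Skeptics": 0.25, "Explorers": 0.15}

def per_student_allocation(total_budget: float, cohort_size: int) -> dict:
    """Return the budget available per student in each profile."""
    allocation = {}
    for profile, budget_share in RESOURCE_SPLIT.items():
        n_students = max(round(PROFILE_SHARE[profile] * cohort_size), 1)
        allocation[profile] = (total_budget * budget_share) / n_students
    return allocation

if __name__ == "__main__":
    # Hypothetical budget of 100,000 (any currency) for a 252-student cohort.
    for profile, amount in per_student_allocation(100_000, 252).items():
        print(f"{profile}: {amount:,.0f} per student")
```

Because Explorers are few, even a 15% share yields the highest per-student investment under these assumptions, which is consistent with the recommendation of minimal but high-quality resources for that group.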
Faculty development programs require differentiated approaches aligned with faculty member profiles. Traditional training models assuming homogeneous faculty needs prove inadequate given our findings. Instead, institutions should implement tiered support systems: advanced innovation labs for Explorers, structured progression pathways for Moderates, and evidence-based demonstration programs for Skeptics. This approach optimizes training effectiveness while respecting diverse faculty learning preferences and backgrounds.
Technology infrastructure decisions should accommodate profile diversity rather than seeking universal solutions. Explorers require access to cutting-edge, experimental GenAI tools and sufficient freedom for innovation. Moderates benefit from stable, well-supported platforms with clear implementation guidance. Skeptics need robust evidence of educational effectiveness and minimal risk exposure. Portfolio approaches to technology selection, rather than institutional standardization, may optimize overall adoption outcomes.
Assessment and evaluation frameworks must account for differential adoption patterns when measuring GenAI implementation success. Traditional metrics focusing solely on usage rates or user satisfaction scores may misrepresent institutional effectiveness. Our framework suggests evaluation approaches that recognize quality of integration, pedagogical effectiveness, and long-term sustainability across different user profiles. Skeptics may show lower usage but higher-quality implementation, while Explorers may demonstrate high innovation but variable pedagogical alignment.
Change management strategies should leverage profile strengths rather than treating diversity as an obstacle. Explorers can serve as proof-of-concept leaders, Moderates can facilitate systematic rollout, and Skeptics can provide critical feedback for continuous improvement. This approach transforms profile heterogeneity into institutional advantage, consistent with organizational learning theory (Senge, 1990) and recent research on institutional technology adoption (Shin et al., 2024).
Ethical considerations also require profile-sensitive approaches. Skeptics’ concerns often center on pedagogical integrity and educational quality, perspectives that are valuable for the responsible implementation of AI. Institutional policies should incorporate these concerns as quality assurance mechanisms rather than obstacles to overcome, ensuring ethical integration of GenAI that maintains educational standards while embracing innovation.
5.4 Future research directions
This study opens several promising avenues for future research based on our validated TPACK-UTAUT framework and empirically grounded typology. Longitudinal studies are the highest priority. Our cross-sectional design captures profiles at a single time point, but GenAI adoption likely follows developmental trajectories. Future research should investigate profile stability over time, migration patterns between profiles, and factors facilitating positive transitions, particularly whether Skeptics migrate toward Moderate positions with appropriate support.
Cross-cultural validation is essential for establishing generalizability beyond Peruvian business education contexts. Cultural factors may significantly influence technology adoption patterns and the relative importance of UTAUT constructs. Validating our typology across different educational systems could reveal whether GenAI adoption patterns are universal or context-specific, informing global educational technology policy.
Intervention effectiveness research represents a crucial next step for translating descriptive findings into prescriptive guidance. Experimental studies comparing profile-tailored versus generic training approaches could quantify the practical benefits of differentiated instructional strategies, providing concrete evidence for institutional investment in personalized development programs.
Learning analytics applications could leverage our framework for real-time profile identification and adaptive support provision. Machine learning approaches using institutional data could help classify students into profiles, enabling dynamic customization of support and advancing practical applicability. Methodological innovations building on our hybrid validation approach could further advance educational technology research through ensemble methods, advanced clustering algorithms, and novel validation approaches that account for learning environment characteristics.
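As an illustration of how such a learning analytics pipeline might be prototyped, the sketch below trains a standard classifier on synthetic survey-style features to approximate profile membership. The feature set, data, and model choice are assumptions for illustration only and do not reproduce our analysis.

```python
# Illustrative sketch (not the study's code): approximating profile membership
# from routinely collected survey features with scikit-learn.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: e.g., TPACK composite, Performance Expectancy,
# Effort Expectancy, age, and years of professional experience per student.
rng = np.random.default_rng(0)
X = rng.normal(size=(252, 5))                        # placeholder survey data
y = rng.choice(["Explorer", "Moderate", "Skeptic"],  # placeholder labels drawn
               size=252, p=[0.11, 0.68, 0.21])       # from the published typology

model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=200, random_state=0))

# Cross-validated accuracy estimates how reliably institutional data could
# reproduce the survey-derived profiles; with real records, features would be
# drawn from TPACK and UTAUT items rather than random noise.
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

In an institutional deployment, the placeholder arrays would be replaced with actual survey and usage records, and classification quality would need to be monitored as tools and cohorts change.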
6 Conclusion
Our study contributes to the growing body of knowledge on GenAI integration in higher education by identifying and characterizing distinct student profiles in business education. Through cluster analysis of 251 MBA students, we identified three distinct profiles (Skeptics, Explorers, and Moderates), each exhibiting unique patterns of GenAI adoption and usage.
The findings demonstrate that student engagement with GenAI is not uniform but rather follows distinct patterns influenced by factors such as age, professional experience, and technological attitudes. The Skeptics cluster, characterized by more extensive professional experience, shows a more critical yet pragmatic approach to GenAI integration. The Explorers cluster demonstrates high enthusiasm and willingness to experiment with multiple platforms despite limited formal training. The Moderates cluster, representing the largest group, exhibits a balanced approach with higher levels of formal training but moderate platform usage.
These findings have significant implications for educational practice. First, they challenge the conventional one-size-fits-all approach to technology integration in business education. Instead, they suggest the need for differentiated instructional strategies that acknowledge and accommodate these distinct profiles. Second, they highlight the importance of considering student characteristics and attitudes when designing GenAI-enhanced learning experiences. Third, they provide a framework for developing targeted support systems and resources that address the specific needs and concerns of each profile.
6.1 Limitations and boundary conditions
Several limitations of this study should be acknowledged. The sample comprises MBA students from a single institution in Peru, which may limit the generalizability of the findings to other educational contexts or geographic regions. Additionally, the cross-sectional nature of the data collection prevents us from observing how these profiles might evolve as students gain more exposure to GenAI tools.
A fundamental boundary condition of this research is the absence of learning outcome measures as dependent variables. Our study characterizes profiles based on readiness (TPACK), intentions (UTAUT), and attitudes (Performance Expectancy), but it does not empirically validate whether profile-tailored instructional strategies yield superior learning results compared with generic approaches. The proposed differentiated pedagogical framework (Table 7) presents theoretically grounded hypotheses derived from our validated typology, not empirically tested interventions. Experimental validation of these strategies, including measures of critical thinking development, assignment quality, and long-term skill retention, constitutes the essential next phase of this research program.
Methodologically, our cross-sectional design limits causal inference; we cannot determine whether observed attitudes precede or follow profile membership. The low silhouette score (0.093) reflects substantial overlap among profiles, indicating that student adoption patterns lie on multidimensional continua rather than discrete categories, a finding with implications for the precision of automated profile classification systems. Additionally, the differential performance of UTAUT constructs in our high-agency MBA context (with Social Influence and Facilitating Conditions showing limited predictive power) suggests that our integrated framework may require calibration when applied to contexts with stronger institutional mandates or different cultural orientations.
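For readers less familiar with the metric, the minimal sketch below shows how a silhouette coefficient such as the 0.093 reported above is computed with scikit-learn; the synthetic data are placeholders, not the study's data.

```python
# Minimal sketch: computing a silhouette coefficient for a three-cluster
# solution. Values near 0 indicate overlapping clusters; values near 1
# indicate well-separated clusters, which is why 0.093 points to continua
# rather than sharply distinct profiles.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = StandardScaler().fit_transform(rng.normal(size=(252, 6)))  # placeholder responses

labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
print(f"Silhouette score: {silhouette_score(X, labels):.3f}")
```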
In conclusion, our findings suggest that successful integration of GenAI in business education requires a nuanced understanding of student profiles and the development of targeted educational interventions. As GenAI continues to transform business practice and education, this understanding becomes increasingly crucial for preparing students for future professional challenges. The identification of these distinct profiles provides a foundation for more effective and personalized approaches to GenAI integration in business education. Therefore, future research should address the limitations of the study by conducting longitudinal studies to track profile evolution, cross-cultural studies to validate these profiles in different contexts, and intervention studies to evaluate the effectiveness of profile-based instructional strategies. Additionally, investigating how these profiles relate to learning outcomes and professional success could provide valuable insights for curriculum design and educational policy.
Data availability statement
The datasets presented in this article are not readily available because the authors are not authorized to share the data publicly with third parties. However, the data can be shared privately upon request. Requests to access the datasets should be directed to nnunezm@pucp.edu.pe.
Ethics statement
The studies involving humans were approved by CEI-CCSSHHyA, Pontificia Universidad Católica del Perú (PUCP). The studies were conducted in accordance with the local legislation and institutional requirements. The Ethics Committee/Institutional Review Board waived the requirement of written informed consent for participation from the participants or the participants’ legal guardians/next of kin because data collection was performed in a digital format; informed consent was instead obtained at the beginning of the survey instrument.
Author contributions
NN: Formal analysis, Writing – review & editing, Writing – original draft, Conceptualization. RF-C: Funding acquisition, Writing – review & editing, Project administration, Conceptualization, Validation. GC-M: Investigation, Writing – review & editing, Writing – original draft, Formal analysis, Data curation.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This project was funded by the Dirección de Fomento de la Investigación at the PUCP through grant DFI-2023-PI1068.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. Because none of the authors are native English speakers, we used ChatGPT to improve the manuscript’s fluency and Grammarly to correct grammar issues.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Ait Ali, D., El Meniari, A., El Filali, S., Morabite, O., Senhaji, F., and Khabbache, H. (2023). Empirical research on technological pedagogical content knowledge (TPACK) framework in health professions education: a literature review. Med. Sci. Educ. 33, 791–803. doi: 10.1007/S40670-023-01786-Z
Al-Abdullatif, A. M. (2024). Modeling teachers’ acceptance of generative artificial intelligence use in higher education: the role of AI literacy, intelligent TPACK, and perceived trust. Educ. Sci. 14:1209. doi: 10.3390/educsci14111209
Al-Adwan, A. S., and Al-Debei, M. M. (2024). The determinants of Gen Z’s metaverse adoption decisions in higher education: integrating UTAUT2 with personal innovativeness in IT. Educ. Inf. Technol. 29, 7413–7445. doi: 10.1007/s10639-023-12080-1
Alzahrani, A., and Alzahrani, A. (2025). Understanding ChatGPT adoption in universities: the impact of TPACK and UTAUT2 on teachers. Rev. Iberoam. Educ. Distancia 28, 37–58. doi: 10.5944/ried.28.1.41498
Arancibia-Herrera, M., Castro-Appelhanz, M. J., and Sigerson, A. (2024). Relationships between conceptions and ICT competencies: study of nine didactic sequences of Chilean teachers. Educ. Res. 50:e260125. doi: 10.1590/S1678-4634202450260125es
Bahçivan, E., Güneş, E., and Üstündağ, M. (2018). A comprehensive model covering prospective teachers’ technology use: the relationships among self, teaching and learning conceptions and attitudes. Technol. Pedagog. Educ. 27, 399–416. doi: 10.1080/1475939X.2018.1479296
Bautista, A., Estrada, C., Jaravata, A. M., Mangaser, L. M., Narag, F., Soquila, R., et al. (2024). Preservice teachers’ readiness towards integrating AI-based tools in education: a TPACK approach. Educ. Process. Int. J. 13, 40–68. doi: 10.22521/edupij.2024.133.3
Brundiers, K., Barth, M., Cebrián, G., Cohen, M., Diaz, L., Doucette-Remington, S., et al. (2021). Key competencies in sustainability in higher education—toward an agreed-upon reference framework. Sustain. Sci. 16, 13–29. doi: 10.1007/s11625-020-00838-2
Cabero-Almenara, J., Palacios-Rodríguez, A., Loaiza-Aguirre, M. I., and Andrade-Abarca, P. S. (2024). The impact of pedagogical beliefs on the adoption of generative AI in higher education: predictive model from UTAUT2. Front. Artif. Intell. 7:1497705. doi: 10.3389/frai.2024.1497705
Celik, I. (2023). Towards intelligent-TPACK: an empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Comput. Hum. Behav. 138:107468. doi: 10.1016/j.chb.2022.107468
Chan, C. K. Y., and Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20:43. doi: 10.1186/s41239-023-00411-8
Chan, C. K. Y., and Lee, K. K. (2023). The AI generation gap: are gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their gen X and millennial generation teachers? Smart Learn. Environ. 10:60. doi: 10.1186/s40561-023-00269-3
Cheng, S. L., Chang, J. C., and Romero, K. (2022). Are pedagogical beliefs an internal barrier for technology integration? The interdependent nature of teacher beliefs. Educ. Inf. Technol. 27, 5215–5232. doi: 10.1007/s10639-021-10835-2
Cheng, J., Hall, J. A., Wang, Q., and Lei, J. (2024). More than high, medium, and low: pre-service teacher TPACK profiles and intentions to teach with technology. Educ. Inf. Technol. 29, 24387–24413. doi: 10.1007/s10639-024-12793-x
Chiu, T. K. (2023). The impact of generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney. Interact. Learn. Environ. 32, 6187–6203. doi: 10.1080/10494820.2023.2253861
Ebert, C., and Louridas, P. (2023). Generative AI for software practitioners. IEEE Softw. 40, 30–38. doi: 10.1109/ms.2023.3265877
Firat, M. (2023). What chatGPT means for universities: perceptions of scholars and students. J. Appl. Learn. Teach. 6, 57–63. doi: 10.37074/jalt.2023.6.1.22
Greene, M. D., and Jones, W. M. (2020). Analyzing contextual levels and applications of technological pedagogical content knowledge (TPACK) in English as a second language subject area: a systematic literature review. Educ. Technol. Soc. 23, 75–88. Available at: https://www.jstor.org/stable/26981745
Hair, J. F., Black, W. C., Babin, B. J., and Anderson, R. E. (2019). Multivariate data analysis (8th ed.). Cengage.
Hava, K., and Babayiğit, Ö. (2024). Exploring the relationship between teachers’ competencies in AI-TPACK and digital proficiency. Educ. Inf. Technol. 30, 3491–3508. doi: 10.1007/s10639-024-12939-x
Healy, M., Hammer, S., and McIlveen, P. (2022). Mapping graduate employability and career development in higher education research: a citation network analysis. Stud. High. Educ. 47, 799–811. doi: 10.1080/03075079.2020.1804851
Jammeh, A. L. J., Karegeya, C., and Ladage, S. (2024). Application of technological pedagogical content knowledge in smart classrooms: views and its effect on students’ performance in chemistry. Educ. Inf. Technol. 29, 9189–9219. doi: 10.1007/s10639-023-12158-w
Karataş, F., and Ataç, B. A. (2024). When TPACK meets artificial intelligence: analyzing TPACK and AI-TPACK components through structural equation modelling. Educ. Inf. Technol. 30, 8979–9004. doi: 10.1007/s10639-024-13164-2
Khlaif, Z. N., Ayyoub, A., Hamamra, B., Bensalem, E., Mitwally, M. A., Ayyoub, A., et al. (2024). University teachers’ views on the adoption and integration of generative AI tools for student assessment in higher education. Educ. Sci. 14:1090. doi: 10.3390/educsci14101090
Korinek, A. (2023). Generative AI for economic research: use cases and implications for economists. J. Econ. Lit. 61, 1281–1317. doi: 10.1257/jel.20231736
Kshetri, N., Dwivedi, Y. K., Davenport, T. H., and Panteli, N. (2024). Generative artificial intelligence in marketing: applications, opportunities, challenges, and research agenda. Int. J. Inf. Manag. 75:102716. doi: 10.1016/j.ijinfomgt.2023.102716
Li, Y., Garza, V., Keicher, A., and Popov, V. (2019). Predicting high school teacher use of technology: pedagogical beliefs, technological beliefs and attitudes, and teacher training. Technol. Knowl. Learn. 24, 501–518. doi: 10.1007/s10758-018-9355-2
Li, N., Ling Lau, K., Liang, Y., and Sing Chai, C. (2024). Pre-service foreign language teachers’ TPACK preparation for technology integration: what are the profiles and key drivers? Asia Pac. J. Educ., 1–17. doi: 10.1080/02188791.2024.2415392
Malusay, J. T., Cortes, S. T., Ontolan, J. M., Englis, T. P., Pagaran, G. M., and Dizon, R. L. (2025). Strengthening STEM education through a professional development program on enhancing teachers’ TPACK in selected calculus topics. Front. Educ. 9:1487350. doi: 10.3389/feduc.2024.1487350
Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., and Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Educ. Sci. 13:856. doi: 10.3390/educsci13090856
Mishra, P., and Koehler, M. J. (2006). Technological pedagogical content knowledge: a framework for teacher knowledge. Teachers College Record 108, 1017–1054. doi: 10.1111/j.1467-9620.2006.00684.x
Momenanzadeh, M., Mashhadi, A., Gooniband Shooshtari, Z., and Arus-Hita, J. (2023). English as a foreign language preservice teachers’ technological pedagogical content knowledge: a quantitative comparative study. J. Res. Appl. Linguist. 14, 161–172. doi: 10.22055/rals.2023.44207.3100
Murphy, K., López-Pernas, S., and Saqr, M. (2024). “Dissimilarity-based cluster analysis of educational data: a comparative tutorial using R” in Learning analytics methods and tutorials: a practical guide using R. eds. S. López-Pernas and M. Saqr (Cham: Springer Nature Switzerland), 231–283.
Nikolic, S., Wentworth, I., Sheridan, L., Moss, S., Duursma, E., Jones, R. A., et al. (2024). A systematic literature review of attitudes, intentions and behaviours of teaching academics pertaining to AI and generative AI (GenAI) in higher education: an analysis of GenAI adoption using the UTAUT framework. Australas. J. Educ. Technol. 40, 56–75. doi: 10.14742/ajet.9643
Ning, Y., Zhang, C., Xu, B., Zhou, Y., and Wijaya, T. T. (2024). Teachers’ AI-TPACK: exploring the relationship between knowledge elements. Sustainability 16:978. doi: 10.3390/su16030978
Oved, O., and Alt, D. (2025). Teachers’ technological pedagogical content knowledge (TPACK) as a precursor to their perceived adopting of educational AI tools for teaching purposes. Educ. Inf. Technol. 30, 14095–14121. doi: 10.1007/s10639-025-13371-5
Park, O. C., and Lee, J. (2013). “Adaptive instructional systems” in Handbook of research on educational communications and technology. eds. D. Jonassen and M. Driscoll (Abingdon: Routledge), 647–680.
Patterson, A., Frydenberg, M., and Basma, L. (2024). Examining generative artificial intelligence adoption in academia: a UTAUT perspective. Issues Inf. Syst. 25, 238–251. doi: 10.48009/3_iis_2024_119
Perez, R. C. L. (2024). AI in higher education: faculty perspective towards artificial intelligence through UTAUT approach. Ho Chi Minh City Open Univ. J. Sci. 14, 32–50. doi: 10.46223/hcmcoujs.soci.en.14.4.2851.2024
Prestridge, S. (2017). Examining the shaping of teachers’ pedagogical orientation for the use of technology. Technol. Pedagog. Educ. 26, 367–381. doi: 10.1080/1475939X.2016.1258369
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday/Currency.
Sergeeva, O. V., Zheltukhina, M. R., Shoustikova, T., Tukhvatullina, L. R., Dobrokhotov, D. A., and Kondrashev, S. V. (2025). Understanding higher education students’ adoption of generative AI technologies: an empirical investigation using UTAUT2. Contemp. Educ. Technol. 17:ep571. doi: 10.30935/cedtech/16039
Shin, C., Gi Seo, D., Jin, S., Hwa Lee, S., and Je Park, H. (2024). Educational technology in the university: a comprehensive look at the role of a professor and artificial intelligence. IEEE Access 12, 116727–116739. doi: 10.1109/ACCESS.2024.3447067
Succi, C., and Canovi, M. (2020). Soft skills to enhance graduate employability: comparing students and employers’ perceptions. Stud. High. Educ. 45, 1834–1847. doi: 10.1080/03075079.2019.1585420
Tondeur, J., van Braak, J., Ertmer, P. A., and Ottenbreit-Leftwich, A. (2017). Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: a systematic review of qualitative evidence. Educ. Technol. Res. Dev. 65, 555–575. doi: 10.1007/s11423-016-9481-2
Tram, N. H. M. (2024). Unveiling the drivers of AI integration among language teachers: integrating UTAUT and AI-TPACK. Comput. Sch. 42, 100–120. doi: 10.1080/07380569.2024.2441155
Trevisan, O., and De Rossi, M. (2023). Preservice teachers’ dispositions for technology integration: common profiles in different contexts across Europe. Technol. Pedagog. Educ. 32, 191–204. doi: 10.1080/1475939X.2023.2169338
Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Q. 27, 425–478. doi: 10.2307/30036540
Venkatesh, V., Thong, J. Y., and Xu, X. (2012). Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 36, 157–178. doi: 10.2307/41410412
Wang, K., Ruan, Q., Zhang, X., Fu, C., and Duan, B. (2024). Pre-service teachers’ GenAI anxiety, technology self-efficacy, and TPACK: their structural relations with behavioral intention to design GenAI-assisted teaching. Behav. Sci. 14:373. doi: 10.3390/bs14050373
Wing Chan, K., and Wai Tang, W. (2024). Evaluating English teachers’ artificial intelligence readiness and training needs with a TPACK-based model. World J. Engl. Lang. 15, 129–145. doi: 10.5430/wjel.v15n1p129
Yang, Y., Xia, Q., Liu, C., and Chiu, T. K. F. (2025). The impact of TPACK on teachers’ willingness to integrate generative artificial intelligence (GenAI): the moderating role of negative emotions and the buffering effects of need satisfaction. Teach. Teach. Educ. 154:104877. doi: 10.1016/j.tate.2024.104877
Keywords: adaptive learning, business education, Generative Artificial Intelligence, learning analytics, TPACK, UTAUT
Citation: Nunez NA, Fernández-Concha R and Cornejo-Meza G (2026) One size does not fit all: customizing teaching and learning strategies with Generative AI. Front. Educ. 11:1699228. doi: 10.3389/feduc.2026.1699228
Edited by: Yu-Chun Kuo, Rowan University, United States
Reviewed by: Jorge Mendonça, School of Health of the Polytechnic Institute of Porto, Portugal; Alsaeed Alshamy, Sultan Qaboos University, Oman
Copyright © 2026 Nunez, Fernández-Concha and Cornejo-Meza. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Nicolas A. Nunez, nnunezm@pucp.edu.pe