- 1 College of Art & Design, Nanning University, Nanning, China
- 2 Department of Smart Experience Design, Graduate School of Techno Design, Kookmin University, Seoul, Republic of Korea
- 3 Culture Design Lab, Graduate School of Techno Design, Kookmin University, Seoul, Republic of Korea
- 4 Department of Global Convergence, Kangwon National University, Chuncheon-si, Republic of Korea
Introduction: As Artificial Intelligence-Generated Content (AIGC) tools (e.g., ChatGPT for writing assistance, Midjourney for image generation) diffuse into educational settings, their adoption reflects a psychological interplay between functional appraisals and ethical concerns. This study proposes and tests a dual-path model integrating the Technology Acceptance Model (TAM) with Protection Motivation Theory (PMT), incorporating Perceived Ethical Concern (PEC) and Moral Sensitivity (MS).
Methods: Ten latent constructs were modeled: Perceived Severity (PS), Perceived Vulnerability (PV), Self-Efficacy (SE), Response Efficacy (RE), Perceived Ease of Use (PEOU), Perceived Usefulness (PU), PEC, MS, Behavioral Intention (BI), and Continuance Intention (CI). Using structural equation modeling based on data from 589 respondents with prior AIGC experience, we evaluated 14 hypotheses.
Results: The results support the TAM pathway: PU and PEOU positively predict BI and CI. Meanwhile, PMT components operate indirectly; RE and SE influence appraisals by elevating PU and mitigating PEC, whereas PS and PV elevate PEC. PEC shows a significant negative effect on BI and an indirect negative impact on CI. Notably, this negative PEC-BI association is more pronounced among individuals with higher MS.
Discussion: The findings extend psychological accounts of AI tool adoption by jointly modeling moral appraisal and functional value in educational contexts. Furthermore, the study offers actionable implications for platform design and policy, suggesting that improving usability and efficacy cues while increasing ethical transparency can foster responsible, sustained use.
1 Introduction
In recent years, with the continuous breakthroughs in Generative Artificial Intelligence technologies and the maturation of large-scale model infrastructures, Artificial Intelligence-Generated Content (AIGC) tools have rapidly permeated the field of education (Zawacki-Richter et al., 2019). Tools such as ChatGPT, Notion AI, Writesonic, and Copilot are widely used for various tasks including text generation, language refinement, question answering, code assistance, and even academic writing (Dwivedi et al., 2021). Their powerful semantic generation capabilities not only significantly enhance teaching and learning efficiency but also break down the high barriers of professional expertise and time investment traditionally associated with content creation (Ocen et al., 2025). In particular, within higher education settings, both students and teachers have integrated AIGC tools into daily educational practices at an unprecedented pace, using them for assignment support, thesis writing, project proposal generation, courseware design, translation and editing, and self-directed learning (Lo, 2023). This phenomenon is part of a broader digital transformation in education, where AI is not merely a tool but a catalyst for reshaping learning ecosystems and skill requirements (Zarifis and Efthymiou, 2022; Naidoo, 2023). As highlighted in recent studies, the integration of AI necessitates a shift from traditional instruction to adaptive, technology-enhanced pedagogical frameworks (Rughiniș et al., 2025). Meanwhile, some educational platforms have also begun actively integrating AIGC-based interface functions to enhance interactivity and personalization in teaching (Smutny and Schreiberova, 2020). These developments also reflect the growing need for sustainable, scalable, and learner-centered educational infrastructures that leverage AI responsibly over the long term.
On one hand, AIGC tools are seen as powerful assistive technologies that can alleviate student workload and improve instructional quality. On the other hand, concerns are mounting over issues such as potential overreliance, unverifiable information authenticity, uncontrollable content generation, and unclear accountability. These issues raise serious questions regarding the legitimacy, rationality, and ethical boundaries of AIGC use in educational contexts (Floridi and Chiriatti, 2020; Cotton et al., 2024). For instance, some educators have reported academic misconduct stemming from students’ use of AIGC tools for writing assignments. Students themselves have noted that reliance on these tools weakens independent thinking. In response to increasing incidents of AIGC-enabled cheating, several universities have even temporarily adjusted their assessment mechanisms (Smutny and Schreiberova, 2020). Although a growing body of research has explored the functional features and application value of AIGC technologies, most studies remain focused on usability and technology adoption pathways, with limited attention to users’ ethical concerns or moral perceptions in decision-making processes (Karran et al., 2024). In real-world educational settings, users often base their decision to adopt AIGC tools not solely on perceived usefulness but on a psychological weighing of potential risks. For example, a student may believe that AIGC significantly improves writing efficiency (Perceived Usefulness, PU), yet simultaneously worry that its use could lead to plagiarism (Perceived Ethical Concern, PEC). This coexistence of tool efficacy and moral risk renders the adoption decision-making process more complex and fraught with uncertainty (Ravšelj et al., 2025).
While most existing studies have employed the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT) to explore user behavior, these models primarily focus on technological factors such as PU and Perceived Ease of Use (PEOU) and their effects on Behavioral Intention (BI). However, they fall short in explaining users’ cognitive conflicts and behavioral responses when faced with ethical dilemmas (Ghimire and Edwards, 2024). In contrast, the Protection Motivation Theory (PMT) emphasizes individuals’ cognitive appraisals of threats and coping strategies. It has been widely applied in research on health behavior, security behavior, and AI-related ethical risk, demonstrating strong explanatory power for risk-related cognition (Shrivastava, 2025). Nevertheless, an integrated behavioral model that combines TAM and PMT—while incorporating ethical cognition and individual moral traits—remains lacking, especially for systematically explaining users’ adoption mechanisms and behavioral patterns regarding AIGC tools in educational contexts.
To address this gap, this study constructs a dual-pathway structural model by integrating TAM and PMT, which incorporates both a “functional cognition pathway” and an “ethical cognition pathway.” The model introduces PEC as a key ethical cognition variable and Moral Sensitivity (MS) as a moderating variable, aiming to investigate users’ cognitive mechanisms, psychological trade-offs, and BI when engaging with AIGC tools in educational settings (Chenoweth et al., 2009; Yang, 2024). Specifically, this study seeks to achieve the following three objectives:
1. To examine how threat appraisal and coping cognition influence users’ perceptions of the usefulness of AIGC tools and their recognition of ethical issues;
2. To analyze how both the technological acceptance pathway and the ethical risk pathway jointly affect users’ BI;
3. To investigate how MS moderates the impact of ethical cognition on user behavior.
Through these objectives, this study aims to contribute to theory by addressing the lack of ethical considerations in existing technology adoption models, and to practice by offering ethically informed strategies for educational platforms and policy makers seeking to promote sustainable and inclusive adoption of AI technologies in education.
2 Literature review
2.1 Research progress on AIGC tools in the field of education
From the functional perspective, numerous studies have demonstrated that AIGC tools hold significant potential as learning aids. Chen et al. found that students using AIGC tools during writing exercises significantly improved their structural expression and task completion efficiency (Chen et al., 2024). Jürgensmeier and Skiera noted that AIGC systems provide learners with real-time feedback, rewriting suggestions, and translation support, thereby alleviating language barriers and cognitive burdens (Jürgensmeier and Skiera, 2024). Crompton and Burke further emphasized that AIGC technologies enhance classroom interaction and promote personalized instruction, showing high adaptability and scalability, particularly in higher education and open learning environments (Crompton and Burke, 2024). Across these studies, PU and PEOU have consistently been validated as key cognitive factors influencing students’ BI, thus reinforcing the core theoretical logic of TAM.
However, as AIGC tools become increasingly embedded in critical teaching activities—such as course writing, academic translation, and assessment completion—associated ethical concerns have become more prominent. First, the risk of academic misconduct has significantly increased. Existing research indicates that students often struggle to distinguish between original and AI-generated content, leading to frequent incidents of plagiarism and academic dishonesty (Plata et al., 2023). Second, AIGC may diminish students’ critical thinking abilities. Zhai (2022) argued that overreliance on tool-generated content can lead to a “cognitive outsourcing” effect, impeding the development of independent cognitive construction and expressive capabilities. In addition, concerns related to unclear content authenticity, algorithmic opacity, and ambiguous accountability have emerged as critical ethical risks for both educators and platform administrators (Ateeq et al., 2024). For instance, Yakubu et al. (2025), in a UTAUT-based study on college students’ adoption of AIGC writing tools, included only “social influence” and “facilitating conditions” as key predictors, without considering negative cognitive variables such as ethical concern. From an ethical perspective, several studies have attempted to introduce MS as a key moderating variable in user behavior models, to capture the intensity of individuals’ psychological responses to ethical issues. For instance, Vance et al. (2012) found that MS significantly moderated users’ BI when facing information leakage risks in the context of information security behavior. Liu et al. (2021), in a study on AI algorithm transparency, noted that individuals with high MS are more inclined to respond ethically to algorithmic bias. A summary of representative literature on AIGC applications in education is provided in Table 1.
2.2 Theoretical foundations of user adoption behavior: TAM and PMT
To understand AIGC adoption in education, this study integrates TAM and PMT—two key models explaining how users balance functional benefits with perceived ethical risks in decision-making. These theoretical perspectives are particularly valuable for understanding responsible technology adoption in sustainable educational environments. Originally proposed by Davis (1989), TAM is one of the most widely used frameworks in the information systems domain for predicting user behavior. It posits that two cognitive evaluations—PU and PEOU—are the key determinants of BI to use a new technology (Davis, 1989). PU refers to the belief that using the technology will improve one’s task performance, while PEOU reflects the perceived simplicity of the technology’s operation. Together, these perceptions shape users’ positive evaluations and jointly influence their adoption decisions. Owing to its parsimonious structure and strong predictive validity, TAM has been widely applied in diverse fields such as educational technology, healthcare systems, and e-government (Scherer et al., 2019; ElKheshin and Saleeb, 2020; Wang et al., 2022). In studies related to AIGC, scholars have confirmed the significant impact of PU and PEOU on user adoption. For example, Li (2023) identified PU as one of the strongest predictors of adoption intention among university students using AI-based writing tools; moreover, PEOU was found to positively influence PU, thereby validating the chained causal structure within the TAM framework.
Despite its emphasis on technological performance evaluations in shaping adoption decisions, TAM pays insufficient attention to users’ psychological defense mechanisms and ethical evaluation processes when facing emerging technologies (Islam et al., 2014). This limitation becomes particularly salient in the context of AIGC, which involves complex issues such as moral judgment, responsibility attribution, and value conflicts (Ajibade, 2018). To address this theoretical gap, this study introduces PMT as the foundation for the risk perception pathway. Initially developed by Rogers to explain individuals’ protective behaviors in response to health threats (Floyd et al., 2000), PMT has since been widely applied to areas such as information security (Sommestad et al., 2015), data privacy (Boerman et al., 2021), and AI technology adoption (Park et al., 2024). The core of PMT involves two cognitive stages—threat appraisal and coping appraisal—through which individuals assess risks and decide whether to adopt protective behaviors. In education, these appraisals influence whether AIGC tools are seen as supporting sustainable and equitable learning. Threat appraisal includes Perceived Severity (PS) and Perceived Vulnerability (PV), while coping appraisal comprises Self-Efficacy (SE) and Response Efficacy (RE), reflecting users’ confidence in handling risks and the perceived effectiveness of coping strategies.
Previous research has demonstrated the predictive validity of PMT in explaining user avoidance, resistance or behavioral adjustment in response to technological risks. For example, Park et al. (2024) found that in AI-based virtual service contexts, both PS and PV significantly influenced users’ adoption attitudes and BI, while SE and RE enhanced users’ perceptions of strategic effectiveness and increased their willingness to adopt such technologies. In the cybersecurity domain, Dodge et al. (2023) also identified SE and RE as key predictors of users’ adoption of recommended protective behaviors. Moreover, Su et al. (2022), in the tourism industry context, noted that PS and PV not only shaped practitioners’ risk awareness but also affected their SE and RE, which in turn strengthened their professional resilience and behavioral adjustment capabilities. Integrating PMT with TAM helps construct a comprehensive behavioral model that encompasses both positive motivations (functional adoption pathway) and negative motivations (risk defense pathway). In the context of educational applications of AIGC tools, users may simultaneously hold high functional expectations and ethical or risk-related concerns. Relying solely on TAM is insufficient to fully capture the internal psychological conflicts and dynamic trade-offs users experience (Hsiao and Tang, 2024). More importantly, the four cognitive antecedents in PMT may not only directly influence users’ risk perceptions but also indirectly affect their evaluations of the tool’s functionality, thereby impacting their BI (Hsu and Silalahi, 2024). This potential chain mechanism—from risk perception to functional evaluation to behavioral response—offers a theoretically grounded and practically relevant framework that surpasses TAM in explaining actual user behavior, especially when applied to sustainable technology adoption strategies in education. Accordingly, this study builds upon the primary pathway of TAM by further incorporating the four cognitive antecedents of PMT as predictors of PU and PEC. This dual-pathway adoption behavior model not only extends the explanatory scope of traditional technology acceptance theories but also provides a more systematic theoretical foundation for understanding user behavior in ethically sensitive educational settings involving AIGC tools (Hsu and Silalahi, 2024).
2.3 The role of ethical cognition and moral psychology in technology adoption
In the field of technology acceptance and user behavior research, ethical dimensions have long been situated at the periphery. Particularly in the context of educational artificial intelligence tools, users’ concerns over potential moral risks, value conflicts, and normative uncertainties are often simplified as “cognitive burdens” or “usage barriers.” However, as ethical issues surrounding artificial intelligence technologies gain increasing attention from both society and academia, a growing number of scholars have begun to focus on the mechanisms of ethical cognition during technology adoption, attempting to incorporate “ethical judgment” into behavioral intention prediction models (Kwon et al., 2020). Among these concepts, PEC—a risk-oriented ethical cognition—refers to an individual’s subjective perception of moral conflict, ambiguous responsibility, system manipulativeness, and potential adverse consequences during technology use. Initially prominent in research on data privacy, algorithmic fairness, and AI transparency, PEC has since been introduced into studies of user acceptance of AI systems. In the context of educational AI, Sain and Lawal (2024) found that students’ recognition of ethical risks associated with content generation tools significantly and negatively predicted their usage intentions, with heightened sensitivity observed in high-risk scenarios such as academic writing and course assessments. Specific manifestations of PEC in AIGC applications include lack of content originality, unclear attribution of responsibility, model bias risks, increased student dependency, and unbalanced evaluation mechanisms by instructors. Hsiao and Tang (2024) argued that the challenges posed by AIGC in educational settings are not merely technical but constitute a fundamental disruption to the legitimacy of knowledge production. Thus, conceptualizing PEC as a key cognitive factor influencing BI not only responds to the ethical realities introduced by AI technologies but also enhances the model’s capacity to capture complex psychological structures. Moreover, understanding ethical cognition is essential for ensuring the responsible integration of AIGC tools into education systems that aim to be sustainable and socially accountable.
Notably, the effect of ethical cognition on BI is not uniform across all user groups. Individuals may respond differently to the same ethical issue, and this variation often stems from differences in moral psychological traits. One such trait is MS—an individual’s ability to recognize and respond to moral cues in ethically charged situations—which serves as a critical moderating variable in the relationship between PEC and BI (Crowell et al., 2008). According to the Four-Component Model (Rest et al., 1999), MS constitutes the initial stage of activating moral judgment, moral motivation, and moral action; without the recognition of a moral issue in a given context, subsequent judgment processes are unlikely to be triggered. In user behavior studies, MS is often treated as a moderator that explains variations in behavioral responses to ethical cognition. Mower (2018) revealed that individuals with high MS are more inclined to adopt rejection or avoidance strategies when facing moral dilemmas. Although originally applied in domains such as organizational behavior and medical ethics, MS—as a psychological trait at the individual level—has proven to be theoretically adaptable and empirically valid in AI ethics scenarios. This is particularly relevant in educational contexts, where both students and teachers may experience value conflicts triggered by the use of AIGC tools, and their levels of MS can significantly influence whether PEC translates into actual resistance to usage (Ashford, 2021).
3 Research model and methodology
3.1 Theoretical integration and model development
3.1.1 Theoretical foundations of the research model: TAM and PMT
Given that AIGC technologies enhance learning efficiency while simultaneously raising various ethical concerns, users’ adoption behaviors are influenced not only by their perceptions of tool effectiveness but also by factors such as perceived moral risks. Therefore, based on established theoretical foundations, this study integrates TAM and PMT to construct a more comprehensive and explanatory user adoption model, incorporating both the functional evaluation pathway and the risk defense pathway (Nikolic et al., 2024). TAM and PMT represent two distinct yet complementary cognitive pathways—function-driven and risk-driven, respectively. In the context of AIGC applications in education, where technological functionality intersects with ethical sensitivity, the integration of these two theories allows for a more holistic understanding of user adoption mechanisms and extends the theoretical scope of TAM. Accordingly, this study proposes a dual-pathway user behavior model and introduces two additional variables—PEC and MS—to enhance the model’s explanatory power under ethically sensitive conditions (Rhim et al., 2021).
3.1.2 Dual-pathway model construction and theoretical framework
In the functional cognition pathway, the model follows the classical structure of TAM, incorporating PEOU and PU as its core variables. PEOU not only positively influences PU but also, together with PU, positively predicts BI, thereby forming the main pathway logic of “ease of use—functional benefit—adoption intention” (Adjekum et al., 2024). In addition, PU exerts an indirect positive influence on CI, reflecting the extended effect of functional cognition on sustained usage behavior. In the ethical cognition pathway, the model introduces PEC as a key mediating variable to capture users’ subjective judgment regarding potential ethical risks associated with AIGC tools, such as moral conflict, ambiguous academic responsibility, and increased tool dependency. PEC negatively predicts BI, embodying the mechanism of “ethical concern—behavioral inhibition” (Sain and Lawal, 2024). To reveal the formation mechanism of PEC, the model incorporates four antecedent variables from PMT: PS and PV positively influence PEC, representing users’ threat appraisal of risk, while SE and RE negatively influence PEC, reflecting the regulatory function of individuals’ coping abilities (Su et al., 2022). Furthermore, these four PMT variables may also directly affect PU, indicating that risk cognition may interfere with functional evaluation and thereby indirectly influence BI (Rhim et al., 2021). To account for individual differences in ethical reactions, the model incorporates MS as a moderating variable to examine its effect on the relationship between PEC and BI. Specifically, individuals with high MS are more likely to experience ethical anxiety when faced with the same ethical scenarios, exhibiting a stronger tendency toward behavioral inhibition. In other words, MS amplifies the negative effect of PEC on BI (Ashford, 2021).
3.2 Hypotheses development
Based on the proposed structural model and theoretical logic, this section presents a series of research hypotheses focused on the causal relationships among the core variables, thereby providing a theoretical foundation for the subsequent empirical analysis.
3.2.1 Effects of PMT variables on PU
PU, a core construct within TAM, refers to users’ cognitive judgment that AIGC tools can improve their performance or efficiency in educational settings. However, this functional judgment is not isolated from risk assessment. Theoretically, the direct influence of PMT constructs on PU can be explained by the “cognitive verification cost” mechanism. When users perceive high severity or vulnerability (e.g., hallucinations or ethical pitfalls), they are compelled to allocate additional cognitive resources to verify and correct the AI output. This added effort diminishes the net efficiency gain, thereby directly reducing the tool’s perceived usefulness. According to PMT, users’ evaluation of whether a tool is useful is often influenced by their subjective perceptions of potential threats and their confidence in coping with them. First, users may perceive the use of AIGC tools as potentially resulting in serious negative consequences, such as diminished critical thinking or the erosion of students’ originality. When their PS is high, their positive evaluation of the tool’s effectiveness may be suppressed, thereby reducing PU (González-Ponce et al., 2024).
H1: PS has a significant negative effect on PU.
Second, when users believe they are personally vulnerable to such negative outcomes (i.e., high PV), they may develop doubts about the tool’s reliability and long-term value, leading to a decline in PU. Empirical studies have shown that higher levels of PV often result in defensive attitudes, which in turn inhibit positive evaluations of technology (Jelovčan et al., 2021).
H2: PV has a significant negative effect on PU.
Conversely, when users have strong confidence in their own abilities (i.e., high SE) and believe they can use AIGC tools correctly and safely, they are more likely to focus on the tools’ benefits, which facilitates a higher level of PU. SE has been identified as one of the PMT variables most strongly associated with adoption intention and high performance-related cognition in multiple studies (Hedayati et al., 2023).
H3: SE has a significant positive effect on PU.
Moreover, if users believe that proper management systems and guidance protocols are in place to effectively mitigate potential risks (i.e., high RE), their trust in the tool and perceived utility will likely increase. RE can effectively alleviate concerns about risk and shift user attention toward functional benefits (Courneya and Hellsten, 2001).
H4: RE has a significant positive effect on PU.
3.2.2 Effects of PMT variables on PEC
PEC reflects users’ subjective attention to the potential moral dilemmas that AIGC tools may provoke in educational settings, such as plagiarism, diminished student accountability, and blurred authorship. According to PMT, users’ subjective assessments of threat severity during the threat appraisal phase significantly influence their level of moral alertness. When users perceive the potential consequences of AIGC tools to be highly destructive (i.e., high PS) or believe they are personally more susceptible to ethical misuse (i.e., high PV), they tend to exhibit stronger moral vigilance, thereby intensifying their PEC (Ruan et al., 2020; Jannat et al., 2024).
H5: PS has a significant positive effect on PEC.
H6: PV has a significant positive effect on PEC.
Conversely, if users believe they are capable of identifying risks and using the technology responsibly (i.e., high SE) or trust that institutional policies and guidelines can effectively regulate misuse (i.e., high RE), their ethical concerns may be partially mitigated, thus reducing the level of PEC (Block and Keller, 1998; Sher et al., 2017; Al-Sharafi et al., 2021).
H7: SE has a significant negative effect on PEC.
H8: RE has a significant negative effect on PEC.
3.2.3 TAM pathway: effects of PEOU and PU on BI
According to TAM, users evaluate technology primarily based on PEOU and PU. Numerous empirical studies have shown that PEOU not only reduces cognitive and operational costs by simplifying the usage process but also significantly enhances users’ overall perception of a tool’s utility, thereby influencing their intention to adopt it. Specifically, when users perceive a system as easy to operate, their evaluation of its value tends to improve, leading to a heightened level of PU (Rahman, 2018).
H9: PEOU has a significant positive effect on PU.
Furthermore, PEOU can directly reduce usage barriers and psychological resistance, thereby enhancing BI. Prior research suggests that users’ perceptions of intuitive usability can boost their confidence and willingness to adopt a system, especially during initial encounters with the technology (Khosrow-Pour, 2003).
H10: PEOU has a significant positive effect on BI.
PU, a core construct of TAM, reflects users’ expectations regarding the benefits of using a technology. The positive relationship between PU and BI has been repeatedly validated in studies on AI technology adoption. The more users believe a given tool can improve their learning or work efficiency, the more likely they are to adopt it (Xu et al., 2022; Ali et al., 2024).
H11: PU has a significant positive effect on BI.
3.2.4 Ethical pathway: the effect of PEC on BI
While PU and PEOU can stimulate users’ intention to adopt a technology, in ethically sensitive contexts, the risk-related concerns evoked by PEC may counteract the positive effects of functional cognition. When users believe that the use of AIGC tools may violate educational fairness, undermine student autonomy, or generate issues related to academic responsibility, they may consciously avoid adopting the tool—even if its functional benefits are evident (Chiu and Kuo, 2007).
H12: PEC has a significant negative effect on BI.
3.2.5 Continuance pathway and the moderating role of MS
BI serves as a key antecedent to CI. Once users develop a clear intention to adopt a tool, they typically move toward forming habitual usage patterns, reflected in their continued use of the technology (Jeong et al., 2025). Empirical research across various digital platforms and AI application contexts has consistently confirmed the positive predictive relationship between BI and CI (Zhou et al., 2018; Liu et al., 2023).
H13: BI has a significant positive effect on CI.
In addition, users’ responses to ethical information vary significantly. MS, as a psychological trait, describes an individual’s capacity to recognize value conflicts in morally salient situations. When MS is high, users are more likely to perceive the severity of the issues represented by PEC, thereby amplifying the negative effect of PEC on BI. Research in moral education has demonstrated that individuals with high MS are more inclined to translate perceived moral conflicts into avoidance behavior (Strahovnik, 2018).
H14: MS positively moderates the relationship between PEC and BI; that is, the higher the MS, the stronger the negative effect of PEC on BI.
3.2.6 Proposed research model
In summary, based on the step-by-step development of the hypotheses above, this study constructs an integrated dual-pathway theoretical model that comprehensively considers both functional and ethical factors influencing the adoption of AIGC tools in educational contexts, as illustrated in Figure 1. This model will be empirically tested using Structural Equation Modeling (SEM) in subsequent analyses to evaluate its theoretical validity and explanatory power.
3.3 Empirical design and implementation strategy
3.3.1 Variable operationalization and instrument development
A structured questionnaire was developed to measure ten latent variables (i.e., PS, PV, SE, RE, PEOU, PU, PEC, MS, BI, CI), with items adapted from validated scales and refined for the educational AIGC context. All items were measured using a five-point Likert scale. The questionnaire included four sections: study introduction, demographic questions, measurement items (randomized to reduce bias), and closing remarks. Expert reviews and a pilot test (n = 30) confirmed the clarity, validity, and contextual suitability of the instrument (see Table 2).
3.3.2 Sampling strategy and survey implementation
This study used non-probability convenience sampling to recruit university students, teachers, and education professionals with prior AIGC experience. A screening question ensured eligibility. A power analysis was conducted using G*Power 3.1 to determine the minimum required sample size. To detect a medium effect size (f² = 0.15) with a statistical power of 0.95 and a significance level of 0.05 (considering 10 predictors in the regression model), the calculated minimum sample size was 172. Our final sample of 589 participants far exceeds this requirement, ensuring adequate statistical power for data analysis. A pilot test (n = 30) refined item clarity and optimized completion time. The final questionnaire was hosted on Wenjuanxing and distributed through targeted academic networks to ensure relevance. Specifically, recruitment links were shared in university student course groups, faculty professional exchange groups, and educational technology forums on WeChat and QQ. Responses were collected anonymously and voluntarily. Before accessing the questionnaire, all participants were presented with the study’s purpose and privacy policy, and they provided digital informed consent by clicking an ‘I Agree’ button. Data quality was ensured through logical checks, mandatory items, and reverse-coded questions. After data collection, responses were cleaned and prepared in Excel and SPSS for analysis. The survey complied with ethical standards, ensuring anonymity and academic-only use of data.
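For transparency, the G*Power result can be reproduced independently. The minimal Python sketch below assumes the same inputs (10 predictors, f² = 0.15, α = 0.05, target power of 0.95) and uses SciPy's noncentral F distribution to search for the smallest sample size meeting the power target; the function and variable names are illustrative only.

```python
from scipy.stats import f, ncf

def regression_power(n, n_predictors=10, f2=0.15, alpha=0.05):
    """Power of the omnibus F-test for a multiple regression with n observations."""
    df_num = n_predictors            # numerator degrees of freedom
    df_den = n - n_predictors - 1    # denominator degrees of freedom
    nc = f2 * n                      # noncentrality parameter (G*Power convention)
    f_crit = f.ppf(1 - alpha, df_num, df_den)
    return ncf.sf(f_crit, df_num, df_den, nc)

# Smallest n whose power reaches the 0.95 target
n = 12                               # start just above n_predictors + 1
while regression_power(n) < 0.95:
    n += 1
print(n)                             # approximately 172, consistent with the value reported above
```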
3.3.3 Data analysis procedures and modeling approach
This study adopted SEM using SPSS and AMOS to assess measurement quality and test hypotheses. The process included: (1) data screening (checking for missing values and outliers) and descriptive statistics; (2) reliability and validity testing via Cronbach’s α, Composite Reliability (CR), and Average Variance Extracted (AVE); (3) Confirmatory Factor Analysis (CFA) to assess model fit using indices such as CMIN/DF, CFI, TLI, RMSEA, and SRMR. After validating the measurement model, structural path analysis was conducted. The moderating effect of MS was tested through multi-group SEM and interaction-term regression. Discriminant validity was confirmed by comparing AVE with squared inter-construct correlations. Model robustness was ensured via multiple fit indices to avoid overfitting and confirm theoretical coherence.
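For reference, the reliability and convergent validity indices named above follow the standard formulas, where λ_i denotes the standardized loading of item i on a construct measured by k items and measurement errors are assumed uncorrelated; the Fornell-Larcker condition used for discriminant validity is stated alongside them:

```latex
\mathrm{CR} \;=\; \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}
                       {\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k} \left(1 - \lambda_i^{2}\right)},
\qquad
\mathrm{AVE} \;=\; \frac{1}{k} \sum_{i=1}^{k} \lambda_i^{2},
\qquad
\text{Fornell--Larcker: } \sqrt{\mathrm{AVE}_j} \;>\; \lvert r_{jk} \rvert \ \text{ for all } j \neq k .
```

The last condition is equivalent to requiring that each construct's AVE exceeds its squared correlations with all other constructs, as applied in the discriminant validity check described above.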
4 Results
A total of 620 questionnaires were distributed, with 589 valid responses retained after screening, resulting in a 95.0% effective response rate. The sample showed a near-equal gender distribution (52% male, 48% female), with most respondents aged 19–35 (64%). A combined 72% held bachelor’s or master’s degrees, and 25% held doctoral degrees. Students comprised 61% of the sample, while teachers accounted for 21%. All participants had prior AIGC experience, primarily for translation and editing (28%) and writing support (26%) (see Table 3).
Reliability analysis was conducted to evaluate the internal consistency of questionnaire items, indicating how well they measure the same construct. As shown in Table 4, all ten constructs yielded Cronbach’s alpha values above the 0.70 threshold, indicating satisfactory internal consistency. These results confirm that the measurement scales used in this study are reliable.
Validity testing in this study examined both content and construct validity. Content validity was ensured by adapting measurement items from established literature and refining them through preliminary analysis. Construct validity was assessed via CFA, with all model fit indices meeting recommended thresholds (e.g., CMIN/DF = 1.086, GFI = 0.951, RMSEA = 0.012, CFI = 0.995), indicating a good model fit (see Table 5 and Figure 2). These results confirm the robustness of the measurement model. The following section examines convergent and discriminant validity.
Convergent validity was evaluated using AVE and CR. As shown in Table 6, all constructs met the recommended thresholds (AVE > 0.50, CR > 0.70). Furthermore, standardized factor loadings exceeded 0.70 and were statistically significant (p < 0.001). These results confirm the strong convergent validity and internal consistency of the measurement model.
Discriminant validity was confirmed by comparing the square roots of the AVE with inter-construct correlations (Fornell-Larcker criterion). As shown in Table 7, the square roots of the AVE for each construct were greater than their correlations with other constructs, indicating that the measurement model demonstrates good discriminant validity.
This study employed SEM to test the proposed hypotheses. As shown in Figure 3, SEM enabled the analysis of complex causal relationships among latent variables. Model fit was assessed using standard indices to ensure empirical adequacy and theoretical consistency.
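Although the estimation reported here was carried out in AMOS, the hypothesized structure can be sketched for reference with the open-source semopy package in Python. The lavaan-style specification below mirrors the dual-pathway model; the item names (e.g., PU1, PU2, PU3) and the data file name are placeholders, and the PMT and PEOU predictors are treated as observed composite scores for brevity.

```python
import pandas as pd
import semopy

# Dual-pathway structural specification (placeholder indicator names)
MODEL_DESC = """
# measurement model (three indicators per latent construct, for illustration)
PU  =~ PU1 + PU2 + PU3
PEC =~ PEC1 + PEC2 + PEC3
BI  =~ BI1 + BI2 + BI3
CI  =~ CI1 + CI2 + CI3

# structural model: H1-H4 and H9 (antecedents of PU); H5-H8 (antecedents of PEC)
PU  ~ PS + PV + SE + RE + PEOU
PEC ~ PS + PV + SE + RE

# H10-H12 (predictors of BI) and H13 (BI to CI)
BI ~ PEOU + PU + PEC
CI ~ BI
"""

df = pd.read_csv("aigc_item_scores.csv")   # hypothetical item-level data file
model = semopy.Model(MODEL_DESC)
model.fit(df)                              # maximum likelihood estimation by default
print(model.inspect())                     # path estimates, standard errors, p-values
print(semopy.calc_stats(model))            # CFI, TLI, RMSEA and related fit indices
```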
As shown in Table 8, all model fit indices met recommended thresholds, indicating a well-fitting structural model. Absolute fit indices (CMIN/DF = 1.297, GFI = 0.948, AGFI = 0.937, RMSEA = 0.022) and incremental fit indices (NFI = 0.935, IFI = 0.984, TLI = 0.982, CFI = 0.984) confirmed a good model fit. Parsimonious indices (PNFI = 0.821, PCFI = 0.864) also exceeded the 0.50 standard. Collectively, these results demonstrate the robust fit of the structural model (see Table 9).
Structural paths were estimated using Maximum Likelihood Estimation (MLE). Critical ratios exceeded ±1.96, indicating that the results were statistically significant (p < 0.05). Specifically, PEOU (β = 0.201), SE (β = 0.180), and RE (β = 0.149) positively influenced PU, while PS (β = −0.108) and PV (β = −0.132) had negative effects. Regarding PEC, PS (β = 0.134) and PV (β = 0.238) exerted positive effects, whereas SE (β = −0.122) and RE (β = −0.116) showed negative effects. Furthermore, both PEOU (β = 0.231) and PU (β = 0.349) positively predicted BI, while PEC (β = −0.145) negatively impacted BI. Finally, BI had a strong positive effect on CI (β = 0.580). These findings confirm that all hypothesized paths are statistically significant. However, it is important to note that the path coefficients for H1 (PS → PU, β = −0.108) and H8 (RE → PEC, β = −0.116) are relatively weak compared to other structural paths. This suggests that while risk perception factors do influence functional and ethical evaluations, their impact is less dominant than functional drivers like PEOU and PU.
To assess the moderating role of MS, hierarchical regression was conducted using SPSS 26.0. After controlling for demographic variables and AIGC usage patterns, PEC and MS were mean-centered, and an interaction term (PEC × MS) was added. Crucially, as shown in Table 10, the control variable ‘Occupation’ did not show a statistically significant effect on BI (p > 0.05). This lack of significant heterogeneity among students, teachers, and platform employees supports the validity of pooling these subgroups for the analysis. Subsequently, the interaction term had a significant negative effect on BI (β = −0.121, p < 0.001), confirming that MS moderates the PEC–BI relationship. Specifically, higher levels of MS amplify the negative impact of PEC on behavioral intention. The R² value for this regression model was 0.095. It should be noted that this value reflects the variance explained specifically by the interaction analysis setup, rather than the full predictive power of the comprehensive structural model. The primary purpose of this regression was to test the significance of the moderating effect of MS, which was confirmed (p < 0.001), rather than to maximize variance explanation.
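For readers who wish to replicate the moderation test outside SPSS, the following Python sketch with statsmodels follows the same mean-centering and interaction-term logic; the data file, column names, and control variables (gender, occupation, usage_frequency) are assumptions for illustration rather than the exact specification in Table 10.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aigc_composite_scores.csv")   # hypothetical file of composite scores per respondent

# Mean-center the predictor and the moderator before forming the interaction term
df["PEC_c"] = df["PEC"] - df["PEC"].mean()
df["MS_c"] = df["MS"] - df["MS"].mean()

# Regression with controls plus the interaction (PEC_c * MS_c expands to both main effects and their product)
model = smf.ols("BI ~ PEC_c * MS_c + C(gender) + C(occupation) + usage_frequency", data=df).fit()
print(model.summary())                          # the PEC_c:MS_c coefficient tests the moderation

# Simple slopes of PEC on BI at +/- 1 SD of MS, as visualized in Figure 4
for level, label in [(df["MS_c"].std(), "high MS (+1 SD)"), (-df["MS_c"].std(), "low MS (-1 SD)")]:
    slope = model.params["PEC_c"] + model.params["PEC_c:MS_c"] * level
    print(f"Simple slope of PEC at {label}: {slope:.3f}")
```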
Further simple slope analysis (Figure 4) showed that the negative effect of PEC on BI was more pronounced at high levels of MS, and weaker at low levels. This confirms that MS amplifies the inhibitory impact of ethical concerns on behavioral intention, supporting the proposed moderation hypothesis.
5 Discussion
5.1 Summary of key findings
Building on TAM and PMT, this study developed a dual-pathway adoption model that incorporates two critical variables: PEC and MS. The model systematically investigates users’ adoption mechanisms of AIGC tools within educational settings. Structural equation modeling was used to test 14 hypothesized paths, all of which were found to be statistically significant, indicating a robust model fit and theoretical coherence. In the functional cognition pathway, PEOU had a significant positive effect on both PU and BI (supporting H9 and H10), while PU significantly enhanced BI (supporting H11), which in turn promoted CI (supporting H13). These findings confirm the applicability of the core TAM framework in the context of AIGC use in education.
In the risk cognition pathway, PS and PV negatively influenced PU but positively affected PEC. Conversely, SE and RE positively influenced PU while negatively affecting PEC. These results support the logic of the PMT framework by showing that both threat and coping appraisals shape users’ perceptions of AIGC tools’ functionality and ethical risks. Within the ethical pathway, PEC significantly and negatively impacted BI, indicating that ethical concerns can suppress users’ willingness to adopt AIGC tools in educational contexts. Moreover, MS significantly moderated the relationship between PEC and BI, with individuals high in MS being more likely to amplify the negative impact of ethical concerns on adoption behavior. Overall, the dual-pathway model successfully integrates functional, risk, and ethical cognitive dimensions, uncovering the multifaceted drivers of user behavior in educational AIGC scenarios. The findings also validate the explanatory power of MS as a key psychological trait, offering both theoretical and methodological contributions to future research on AI in education.
5.2 Discussion of functional and risk pathways
Among the positive predictors of PU, PEOU and RE showed the most significant effects. Users who perceive AIGC tools as intuitive, easy to operate, and user-friendly are more likely to positively evaluate their usefulness—this finding aligns with the core logic of TAM as proposed by Davis (1989) (supporting H9). Notably, however, RE also had a strong and statistically significant impact on PU, with an effect size nearly comparable to PEOU (supporting H4). This suggests that in educational contexts, users’ trust in institutional safeguards plays a critical role in their assessment of a tool’s value. In the domain of health technology adoption, prior studies have found that users’ perception of clearly defined rules and governance mechanisms significantly enhances both perceived utility and behavioral intention (Zhang et al., 2017). Similarly, in AI-driven education, higher RE is associated with stronger PU. Particularly in education—a domain characterized by high responsibility—the need for institutional clarity becomes more pronounced, amplifying the role of RE in shaping perceptions (Osman and Yatam, 2024). By contrast, although SE had a statistically significant positive effect on PU (supporting H3), its influence was relatively weaker. This may reflect users’ perception of AIGC tools as “functionally rich but logically complex”—even those with adequate experience may still lack confidence in managing output quality (Masry Herzallah and Makaldy, 2025). This finding echoes Kasneci et al.’s (2023) argument that although AIGC can enhance efficiency in education, excessive reliance on system logic may reduce users’ perceived control and agency.
Regarding negative influences on PU, the suppressive effects of PS and PV are statistically significant but relatively weak (supporting H1 and H2). This suggests that while users recognize the potentially severe consequences of AIGC usage (e.g., hallucinations or bias), these risks do not substantially diminish their perception of the tool’s utility. This ‘utility-over-risk’ calculus implies that in educational settings, the demand for efficiency often outweighs concerns about potential severity. These results support the PMT assumption that threat perception weakens positive evaluations (Ruan et al., 2020), and further indicate that users’ risk cognition has deeply permeated their value judgments of AIGC tools. This diverges from the traditional TAM assumption that PU is generally unaffected by negative variables, and can be explained by the contextual specificity of education: AIGC tools are directly linked to critical issues such as student assignments, fair evaluation, and content originality. When users recognize the potential risks—such as encouraging academic laziness or blurring responsibility—they may downgrade their value assessments even if they acknowledge the tool’s efficiency. This aligns with Kelly et al.’s (2023) findings that educational users often hold a dual perception of AIGC as “useful but potentially harmful,” especially in high-stakes scenarios such as examinations and grading, where PU is easily disrupted by ethical evaluations. In addition, the study found that RE had a significant negative effect on PEC. However, this effect was relatively modest (H8: β = −0.116). This indicates that simply believing in the effectiveness of external regulations or policy safeguards is not enough to fully eliminate users’ ethical anxieties. Since AIGC ethics involve complex value judgments, institutional responses alone may have a limited capacity to soothe users’ subjective ethical concerns compared to internal factors. RE not only significantly reduces PEC (Al-Sharafi et al., 2021) but also contributes positively to the formation of PU (Zhang et al., 2017). This mechanism can be attributed to RE’s function in building psychological safety: when users believe that there are clear rules and institutional protections, they are more likely to downplay ethical risks and elevate utility evaluations. This finding is consistent with Sain and Lawal (2024), who show that in ethically sensitive contexts, trust mechanisms at the organizational or platform level often surpass individual self-efficacy in driving technology acceptance.
Regarding the formation of ethical concerns, the analysis confirmed that threat appraisals (i.e., PS and PV) significantly heightened PEC (supporting H5 and H6). Theoretically, this suggests that risk perception acts as a cognitive trigger for ‘moral vigilance.’ When users believe that AIGC misuse leads to severe consequences (e.g., academic dishonesty) or feel personally susceptible to these risks, their psychological defense mechanisms are activated, manifesting as heightened ethical anxiety. Conversely, coping appraisals (i.e., SE and RE) acted as protective factors, significantly mitigating PEC (supporting H7 and H8). This indicates that a ‘sense of control’ acts as a buffer against ethical distress. When users feel competent in managing the tool (SE) or trust that external regulations are effective (RE), they perceive the ethical risks as manageable rather than overwhelming, thereby lowering their overall level of concern.
Taken together, this study identifies PU as a critical intersection between the TAM and PMT pathways, reflecting a dual dynamic—being suppressed by risk variables while simultaneously promoted by structural trust. This structural tension suggests that in the adoption of educational AI tools, the cognitive weight of PU is shaped not only by perceptions of tool performance but also by users’ subjective assessments of whether the associated risks are manageable. Functional design alone may not be sufficient to drive adoption intentions; rather, psychological assurance must be built through institutional safeguards, user training, and ethical education to establish both a sense of control and trust (Masry Herzallah and Makaldy, 2025).
5.3 Ethical mechanisms and moderating effects (PEC and MS)
In this study, PEC emerged as a significant negative predictor of BI. Although its path coefficient (β = −0.145) is smaller than that of functional drivers such as PU (β = 0.349) and PEOU (β = 0.231), its theoretical significance implies that ethical concerns act as a distinct psychological barrier (supporting H12). This finding indicates that, in educational contexts, users’ adoption of AIGC tools is not solely driven by functional expectations, but is highly sensitive to ethical implications. Especially in high-risk academic scenarios such as coursework, academic writing, and examinations, users’ awareness of issues like “lack of originality” or “ambiguous responsibility” may directly suppress their intention to use these tools. This result aligns with Shin’s (2021) assertion that “AI-related ethical concerns can directly undermine user trust, thereby influencing decision-making,” and resonates with findings by Zhou et al. (2018) that “AI usage in educational settings is more constrained by moral norms and expectations.”
From a path coefficient perspective, the effect of PEC on BI was notably stronger than that of PS, PV, SE, and RE—indicating that users’ risk cognition regarding the likelihood and severity of consequences often requires mediation through PEC to impact behavioral outcomes. In other words, PMT variables serve as upstream cognitive factors influencing PEC, rather than directly predicting behavioral intention. This structural relationship reinforces PEC’s theoretical role as a mediator in ethical cognition and illustrates that educational users are more concerned with whether the technology is “morally appropriate” rather than merely “risky.” A similar structure was confirmed by Jannat et al. (2024), who found that PMT variables influence behavior primarily through ethical anxiety.
In addition, the moderating effect of MS on the PEC → BI relationship was also empirically validated. Specifically, when MS was high, the negative effect of PEC on BI was significantly amplified; when MS was low, the effect was attenuated (supporting H14). This suggests that individuals respond differently to the same ethical issue, depending on their capacity to perceive value conflicts. High-MS individuals tend to internalize ethical concerns more strongly, translating them into avoidance behaviors (Strahovnik, 2018). This result is consistent with Rest et al.’s (1999) moral development theory, which posits moral sensitivity as a prerequisite for moral judgment, and echoes Vance et al.’s (2012) empirical findings that individuals with higher MS are more prone to triggering behavioral defense mechanisms. Notably, compared with general cognitive variables in TAM and PMT, MS represents a relatively stable psychological trait, making its moderating effect more context-independent. In education—where norms, fairness, and responsibility are highly emphasized—such trait-level effects are particularly salient. For instance, teachers or postgraduate students typically exhibit a stronger awareness of academic norms, and thus tend to score higher on MS than undergraduate students. This difference may help explain the observed variation in the PEC → BI inhibitory path across user subgroups. Taken together, PEC not only functions as an outcome of risk perception but also serves as a crucial bridge between moral judgment and behavioral response. Meanwhile, MS operates as a psychological threshold at the individual level—only when users possess sufficient ethical awareness can the moral conflicts represented by PEC translate into behavioral inhibition (Crowell et al., 2008). This finding further consolidates the moderating role of MS within the ethical pathway and adds depth to existing AI ethics adoption models.
Furthermore, our findings regarding the interplay between functional value and ethical concerns resonate with the broader discourse on AI-driven educational transformation. As noted by Naidoo (2023) and Rughiniș et al. (2025), the sustainable integration of AI requires balancing technological advancement with human-centric ethical considerations. Our model provides empirical evidence for this balance by demonstrating that adoption is not a linear function of utility alone. Rather, it is the result of a dynamic trade-off: while PU drives the ‘engine’ of adoption, PEC acts as a critical ‘brake’—particularly for morally sensitive users. Therefore, achieving the sustainable transformation envisioned in recent scholarship depends on establishing a ‘function-ethics equilibrium,’ where efficiency gains are matched by equally robust ethical safeguards, such as RE, to foster long-term trust.
5.4 Practical implications
The findings of this study offer concrete guidelines for educational AIGC stakeholders. First, for platform designers, the significant impact of RE on lowering PEC suggests that “visible ethical guardrails” are essential. Since users’ trust relies heavily on external safeguards, designers should embed features such as real-time plagiarism checks, clear data usage transparency badges, and one-click citation generation directly into the interface. These design cues can psychologically reassure users that the system is governed by safe protocols, thereby mitigating ethical anxiety.
Second, for educators and policymakers, the moderating role of MS necessitates a differentiated approach to guidance. Our results show that users with high MS are more prone to avoiding AIGC due to ethical fears. For this group, institutions should provide clear “safe-use lists” and definitive integrity policies to alleviate their uncertainty. Conversely, for users with low MS who may not naturally perceive ethical risks, educational programs should focus on “ethical awakening”—using case studies of AI misuse to heighten their sensitivity and prevent reckless adoption.
Third, regarding the link between risk perception and usefulness, developers must prioritize “explainability” to lower cognitive costs. Since high perceived severity reduces utility by forcing users to verify outputs, future tools should provide confidence scores or source attribution. This would reduce the ‘cognitive verification cost,’ thereby enhancing both perceived usefulness and adoption intention.
5.5 Limitations and directions for future research
Despite validating a dual-path model of AIGC adoption in education, this study has several limitations. First, the sample pooled university students, teachers, and platform employees. Although our regression analysis indicated no significant impact of occupation on behavioral intention, the sample size of specific subgroups was insufficient to conduct a rigorous Multi-Group Analysis (MGA) in SEM. Different stakeholders may indeed possess distinct ethical concerns. Future research should aim for larger, balanced sample sizes to systematically compare group differences and include broader user groups like K–12 teachers or corporate professionals. Second, the cross-sectional design prevents analysis of behavioral change over time; longitudinal or experimental methods are recommended. Third, the model emphasizes individual cognition, overlooking macro-level influences such as social norms or institutional policies. Lastly, although reliability and validity were confirmed, construct applicability across cultural settings requires further validation. Future studies should explore broader contexts and adopt mixed methods to enrich theoretical and empirical contributions. Furthermore, methodological advancements in generative AI offer new avenues for instrument development. As noted in recent scholarship, Large Language Models (LLMs) show promise in automating the generation and cross-cultural adaptation of psychometric items (Grobelny et al., 2025; Marmolejo-Ramos et al., 2025). Future researchers could leverage these tools to further refine the validity and efficiency of scales used in educational technology acceptance.
6 Conclusion
This study developed a dual-pathway model integrating TAM and PMT to examine the adoption of AIGC tools in education, incorporating PEC and MS as key variables. Results from 589 valid responses confirmed that both perceived usefulness and perceived ease of use significantly promote behavioral and continuance intentions. Meanwhile, threat and coping appraisals indirectly shape user behavior via functional and ethical evaluations. PEC exerted a strong negative effect on adoption, an effect significantly moderated by MS—indicating that ethical concerns and individual sensitivity are critical factors in decision-making. Theoretically, this study extends TAM by integrating ethical cognition into sustainable adoption models. Practically, it offers actionable insights for improving AIGC platforms through enhanced usability, risk mitigation, and user-specific ethical strategies. These findings support the development of ethical, inclusive, and resilient educational technologies. Future research should explore longitudinal trends and cultural diversity to further enhance the sustainable and responsible integration of AIGC in education.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
TY: Writing – original draft, Formal analysis, Methodology, Data curation, Conceptualization, Writing – review & editing. YT: Investigation, Writing – original draft, Software, Methodology. QH: Resources, Writing – original draft, Validation, Formal analysis. ZC: Software, Writing – original draft, Visualization. RZ: Writing – review & editing, Funding acquisition, Supervision, Project administration.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Acknowledgments
The authors thank all the participants in this study for their time and willingness to share their experiences and feelings.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abbas, M., Jam, F. A., and Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. Int. J. Educ. Technol. High. Educ. 21:10. doi: 10.1186/s41239-024-00444-7
Adjekum, D., Waller, Z., and Keller, J. (2024). An evaluation of artificial intelligence chatbots ethical use, attitudes towards technology, behavioral factors and student learning outcomes in collegiate aviation programs. Coll. Aviat. Rev. Int. 42, 84–118. doi: 10.22488/okstate.24.100239
Ajibade, P. (2018). Technology acceptance model limitations and criticisms: exploring the practical applications and use in technology-related studies, mixed-method, and qualitative researches. Libr. Philos. Pract. 9, 1–13. Available at: https://digitalcommons.unl.edu/libphilprac/1941/
Ali, I., Warraich, N. F., and Butt, K. (2024). Acceptance and use of artificial intelligence and AI-based applications in education: a meta-analysis and future direction. Inf. Dev. 41, 859–874. doi: 10.1177/02666669241257206
Al-Sharafi, M. A., Al-Qaysi, N., Iahad, N. A., and Al-Emran, M. (2021). Evaluating the sustainable use of mobile payment contactless technologies within and beyond the COVID-19 pandemic using a hybrid SEM-ANN approach. Int. J. Bank Mark. 40, 1071–1095. doi: 10.1108/IJBM-07-2021-0291
Ardito, C. G. (2025). Generative AI detection in higher education assessments. New Dir. Teach. Learn. 2025, 11–28. doi: 10.1002/tl.20624
Ashford, T. (2021). App-centric students and academic integrity: a proposal for assembling socio-technical responsibility. J. Acad. Ethics 19, 35–48. doi: 10.1007/s10805-020-09387-w
Ateeq, A., Alzoraiki, M., Milhem, M., and Ateeq, R. A. (2024). Artificial intelligence in education: implications for academic integrity and the shift toward holistic assessment. Front. Educ. 9:1470979. doi: 10.3389/feduc.2024.1470979
Ateş, H., and Gündüzalp, C. (2025). The convergence of GETAMEL and protection motivation theory: a study on augmented reality-based gamification adoption among science teachers. Educ. Inf. Technol. 30. doi: 10.1007/s10639-025-13480-1
Azeem, S., and Abbas, M. (2025). Personality correlates of academic use of generative artificial intelligence and its outcomes: does fairness matter? Educ. Inf. Technol. 30. doi: 10.1007/s10639-025-13489-6
Block, L. G., and Keller, P. A. (1998). Beyond protection motivation: an integrative theory of health appeals. J. Appl. Soc. Psychol. 28, 1584–1608. doi: 10.1111/j.1559-1816.1998.tb01691.x
Boerman, S. C., Kruikemeier, S., and Zuiderveen Borgesius, F. J. (2021). Exploring motivations for online privacy protection behavior: insights from panel data. Commun. Res. 48, 953–977. doi: 10.1177/0093650218800915
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 20:38. doi: 10.1186/s41239-023-00408-3
Chen, X., Hu, Z., and Wang, C. (2024). Empowering education development through AIGC: a systematic literature review. Educ. Inf. Technol. 29, 17485–17537. doi: 10.1007/s10639-024-12549-7
Chenoweth, T., Minch, R., and Gattiker, T. (2009). Application of protection motivation theory to adoption of protective technologies, in 2009 42nd Hawaii international conference on system sciences, 1–10.
Chiu, C.-K., and Kuo, C. (2007). Understanding behavioral intention in IT ethics: an educational perspective. Available online at: https://www.semanticscholar.org/paper/Understanding-Behavioral-Intention-in-IT-Ethics%3A-An-Chiu-Kuo/77c3d9fd9c56590a87c64b1bc6cb184c3ae17811?utm_source=consensus (Accessed May 5, 2025).
Cotton, D. R. E., Cotton, P. A., and Shipway, J. R. (2024). Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 61, 228–239. doi: 10.1080/14703297.2023.2190148
Courneya, K. S., and Hellsten, L.-A. M. (2001). Cancer prevention as a source of exercise motivation: an experimental test using protection motivation theory. Psychol. Health Med. 6, 59–64. doi: 10.1080/13548500125267
Crompton, H., and Burke, D. (2024). The educational affordances and challenges of ChatGPT: state of the field. TechTrends 68, 380–392. doi: 10.1007/s11528-024-00939-0
Crowell, C. R., Narvaez, D., and Gomberg, A. (2008). “Moral psychology and information ethics: psychological distance and the components of moral behavior in a digital world” in Information security and ethics: concepts, methodologies, tools, and applications. eds. R. Luppicini and R. Adell (Hershey, PA: IGI Global Scientific Publishing), 3269–3281. doi: 10.4018/978-1-60566-022-6.ch045
Dai, Z. (2024). Does AI help? A review of how AIGC affects design education. 2:5. doi: 10.61173/pdymj625
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340. doi: 10.2307/249008
Dodge, C. E., Fisk, N., Burruss, G. W., Moule, R. K., and Jaynes, C. M. (2023). What motivates users to adopt cybersecurity practices? A survey experiment assessing protection motivation theory. Criminol. Public Policy 22, 849–868. doi: 10.1111/1745-9133.12641
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., et al. (2021). Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 57:101994. doi: 10.1016/j.ijinfomgt.2019.08.002
ElKheshin, S., and Saleeb, N. (2020). Assessing the adoption of e-government using TAM model: case of Egypt. Int. J. Manag. Inf. Technol. 12, 1–14. doi: 10.5121/ijmit.2020.12101
Floridi, L., and Chiriatti, M. (2020). GPT-3: its nature, scope, limits, and consequences. Minds Mach. 30, 681–694. doi: 10.1007/s11023-020-09548-1
Floyd, D. L., Prentice-Dunn, S., and Rogers, R. W. (2000). A meta-analysis of research on protection motivation theory. J. Appl. Soc. Psychol. 30, 407–429. doi: 10.1111/j.1559-1816.2000.tb02323.x
Ghimire, A., and Edwards, J. (2024). Generative AI adoption in classroom in context of technology acceptance model (TAM) and the innovation diffusion theory (IDT). arXiv. doi: 10.48550/arXiv.2406.15360
González-Ponce, B. M., Carmona-Márquez, J., Pilatti, A., Díaz-Batanero, C., and Fernández-Calderón, F. (2024). The protection motivation theory as an explanatory model for intention to use alcohol protective behavioral strategies related to the manner of drinking among young adults. Alcohol Alcohol. 59:agae059. doi: 10.1093/alcalc/agae059
Grobelny, J., Szymański, K., and Strozyk, Z. (2025). Act as an expert in psychometry. The evaluation of large language models utility in psychological tests cross-cultural adaptations. Acta Psychol. 261:105813. doi: 10.1016/j.actpsy.2025.105813
Guo, J., Ma, Y., Li, T., Noetel, M., Liao, K., and Greiff, S. (2024). Harnessing artificial intelligence in generative content for enhancing motivation in learning. Learn. Individ. Differ. 116:102547. doi: 10.1016/j.lindif.2024.102547
Hedayati, S., Damghanian, H., Farhadinejad, M., and Rastgar, A. A. (2023). Meta-analysis on application of protection motivation theory in preventive behaviors against COVID-19. Int. J. Disaster Risk Reduct. 94:103758. doi: 10.1016/j.ijdrr.2023.103758
Hess, T. J., McNab, A. L., and Basoglu, K. A. (2014). Reliability generalization of perceived ease of use, perceived usefulness, and behavioral intentions. MIS Q. 38, 1–28. doi: 10.25300/MISQ/2014/38.1.01
Hsiao, C.-H., and Tang, K.-Y. (2024). Beyond acceptance: an empirical investigation of technological, ethical, social, and individual determinants of GenAI-supported learning in higher education. Educ. Inf. Technol. 30, 10725–10750. doi: 10.1007/s10639-024-13263-0
Hsu, W.-L., and Silalahi, A. D. K. (2024). Exploring the paradoxical use of ChatGPT in education: analyzing benefits, risks, and coping strategies through integrated UTAUT and PMT theories using a hybrid approach of SEM and fsQCA. Comput. Educ. Artif. Intell. 7:100329. doi: 10.1016/j.caeai.2024.100329
Huang, S., and Wu, S. (2024). Innovative applications of AIGC in television content generation. Trans. Soc. Sci. Educ. Humanit. Res. 9, 247–252. doi: 10.62051/ycajyq72
Imran, M., and Almusharraf, N. (2023). Analyzing the role of ChatGPT as a writing assistant at higher education level: a systematic review of the literature. Contemp. Educ. Technol. 15:ep464. doi: 10.30935/cedtech/13605
Islam, A. K. M. N., Azad, N., Mäntymäki, M., and Islam, S. M. S. (2014). “TAM and E-learning adoption: a philosophical scrutiny of TAM, its limitations, and prescriptions for E-learning adoption research” in Digital services and information intelligence. eds. H. Li, M. Mäntymäki, and X. Zhang (Berlin, Heidelberg: Springer), 164–175.
Jannat, T., Arefin, S., Hosen, M., Omar, N. A., Al Mamun, A., and Hoque, M. E. (2024). Unlocking the link: protection motivation intention in ethics programs and unethical workplace behavior. Asian J. Bus. Ethics 13, 461–488. doi: 10.1007/s13520-024-00218-4
Jelovčan, L., Vrhovec, S., and Mihelič, A. (2021). Survey about cyberattack protection motivation in higher education: academics at Slovenian universities, 2017. arXiv. doi: 10.48550/arXiv.2109.04132
Jeong, S., Kim, S., and Lee, S. (2025). “Effects of perceived ease of use and perceived usefulness of technology acceptance model on intention to continue using generative AI: focusing on the mediating effect of satisfaction and moderating effect of innovation resistance” in Advances in conceptual modeling. eds. M. Saeki, L. Wong, J. Araujo, C. Ayora, A. Bernasconi, and M. Buffa, et al. (Cham: Springer Nature Switzerland), 99–106.
Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M. S. M., and Joseph, S. (2025). The cognitive paradox of AI in education: between enhancement and erosion. Front. Psychol. 16:1550621. doi: 10.3389/fpsyg.2025.1550621
Jürgensmeier, L., and Skiera, B. (2024). Generative AI for scalable feedback to multimodal exercises. Int. J. Res. Mark. 41, 468–488. doi: 10.1016/j.ijresmar.2024.05.005
Karran, A. J., Charland, P., Martineau, J.-T., Arana, A. O. de G. L. de, Lesage, A., Senecal, S., et al. (2024). Multi-stakeholder perspective on responsible artificial intelligence and acceptability in education. arXiv. doi: 10.48550/arXiv.2402.15027
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103:102274. doi: 10.1016/j.lindif.2023.102274
Kelly, S., Kaye, S.-A., and Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat. Inform. 77:101925. doi: 10.1016/j.tele.2022.101925
Khosrow-Pour, M. (Ed.). (2003). Information technology and organizations: trends, issues, challenges and solutions. Hershey, PA: Idea Group Inc (IGI). Available at: https://books.google.com.cy/books?id=RGXEoPkZVacC&printsec=copyright#v=onepage&q&f=false
Kwon, O., Bae, S., and Shin, B. (2020). Understanding the adoption intention of AI through the ethics lens. Hawaii Int. Conf. Syst. Sci. 2020 (HICSS-53). Available online at: https://aisel.aisnet.org/hicss-53/ks/aspects_of_ai/3 (Accessed May 4, 2025).
Li, K. (2023). Determinants of college students’ actual use of AI-based systems: an extension of the technology acceptance model. Sustainability 15:5221. doi: 10.3390/su15065221
Li, R. (2024). Research on blended teaching model in finance and economics education using AIGC technology. Commun. Educ. Rev. 5. doi: 10.37420/j.cer.2024.029
Li, M., Xie, Q., Enkhtur, A., Meng, S., Chen, L., Yamamoto, B. A., et al. (2025). A framework for developing university policies on generative AI governance: A cross-national comparative study. arXiv. doi: 10.48550/arXiv.2504.02636
Liu, Y., Li, Q., Edu, T., and Negricea, I. C. (2023). Exploring the continuance usage intention of travel applications in the case of Chinese tourists. J. Hosp. Tour. Res. 47, 6–32. doi: 10.1177/1096348020962553
Liu, H., Wang, Y., Fan, W., Liu, X., Li, Y., Jain, S., et al. (2021). Trustworthy AI: a computational perspective. arXiv. doi: 10.48550/arXiv.2107.06641
Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 13:410. doi: 10.3390/educsci13040410
Lu, H., He, L., Yu, H., Pan, T., and Fu, K. (2024). A study on teachers’ willingness to use generative AI technology and its influencing factors: based on an integrated model. Sustainability 16:7216. doi: 10.3390/su16167216
Mahapatra, S. (2024). Impact of ChatGPT on ESL students’ academic writing skills: a mixed methods intervention study. Smart Learn. Environ. 11:9. doi: 10.1186/s40561-024-00295-9
Marmolejo-Ramos, F., Bulut, O., Anunciação, L., Marques, L., Barthakur, A., Kundrat, J., et al. (2025). From human artefact to machine output: automating the “art” of psychological measurement. J. Psychol. AI 1:2561692. doi: 10.1080/29974100.2025.2561692
Masry Herzallah, A., and Makaldy, R. (2025). Technological self-efficacy and sense of coherence: key drivers in teachers’ AI acceptance and adoption. Comput. Educ. Artif. Intell. 8:100377. doi: 10.1016/j.caeai.2025.100377
McIntire, A., Calvert, I., and Ashcraft, J. (2024). Pressure to plagiarize and the choice to cheat: toward a pragmatic reframing of the ethics of academic integrity. Educ. Sci. 14:244. doi: 10.3390/educsci14030244
Mondal, H., and Mondal, S. (2023). ChatGPT in academic writing: maximizing its benefits and minimizing the risks. Indian J. Ophthalmol. 71, 3600–3606. doi: 10.4103/ijo.ijo_718_23
Morales-García, W. C., Sairitupa-Sanchez, L. Z., Morales-García, S. B., and Morales-García, M. (2024). Adaptation and psychometric properties of a brief version of the general self-efficacy scale for use with artificial intelligence (GSE-6AI) among university students. Front. Educ. 9:1293437. doi: 10.3389/feduc.2024.1293437
Mower, D. S. (2018). “Increasing the moral sensitivity of professionals” in Ethics across the curriculum—pedagogical perspectives. eds. E. E. Englehardt and M. S. Pritchard (Cham: Springer International Publishing), 73–88.
Naidoo, D. T. (2023). Integrating TAM and IS success model: exploring the role of blockchain and AI in predicting learner engagement and performance in e-learning. Front. Comput. Sci. 5:1227749. doi: 10.3389/fcomp.2023.1227749
Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., and Nguyen, B.-P. T. (2023). Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 28, 4221–4241. doi: 10.1007/s10639-022-11316-w
Nguyen, H. T., and Tang, C. W. (2022). Students’ intention to take e-learning courses during the COVID-19 pandemic: a protection motivation theory perspective. Int. Rev. Res. Open Distrib. Learn. 23, 21–42. doi: 10.19173/irrodl.v23i3.6178
Nikolic, S., Wentworth, I., Sheridan, L., Moss, S., Duursma, E., Jones, R. A., et al. (2024). A systematic literature review of attitudes, intentions and behaviours of teaching academics pertaining to AI and generative AI (GenAI) in higher education: an analysis of GenAI adoption using the UTAUT framework. Australas. J. Educ. Technol. 40, 56–75. doi: 10.14742/ajet.9643
Ocen, S., Elasu, J., Aarakit, S. M., and Olupot, C. (2025). Artificial intelligence in higher education institutions: review of innovations, opportunities and challenges. Front. Educ. 10:1530247. doi: 10.3389/feduc.2025.1530247
Osman, Z., and Yatam, M. (2024). Enhancing artificial intelligence-enabled transformation acceptance among employees of higher education institutions. Int. J. Acad. Res. Account. Finance Manag. Sci. 14, 289–303. doi: 10.6007/IJARAFMS/v14-i2/21322
Park, J., Yun, J., and Chang, W. (2024). Intention to adopt services by AI avatar: a protection motivation theory perspective. J. Retail. Consum. Serv. 80:103929. doi: 10.1016/j.jretconser.2024.103929
Plata, S., De Guzman, M. A., and Quesada, A. (2023). Emerging research and policy themes on academic integrity in the age of ChatGPT and generative AI. Asian J. Univ. Educ. 19, 743–758. doi: 10.24191/ajue.v19i4.24697
Rahman, N. (2018). Toward understanding PU and PEOU of technology acceptance model. Student Research Symposium. Available online at: https://pdxscholar.library.pdx.edu/studentsymposium/2018/Presentations/3 (Accessed May 5, 2025).
Ravšelj, D., Keržič, D., Tomaževič, N., Umek, L., Brezovar, N., Iahad, N. A., et al. (2025). Higher education students’ perceptions of ChatGPT: a global study of early reactions. PLoS One 20:e0315011. doi: 10.1371/journal.pone.0315011
Rest, J. R., Narvaez, D., Thoma, S. J., and Bebeau, M. J. (1999). DIT2: devising and testing a revised instrument of moral judgment. J. Educ. Psychol. 91, 644–659. doi: 10.1037/0022-0663.91.4.644
Reynolds, S. J. (2008). Moral attentiveness: who pays attention to the moral aspects of life? J. Appl. Psychol. 93, 1027–1041. doi: 10.1037/0021-9010.93.5.1027
Rhim, J., Lee, J.-H., Chen, M., and Lim, A. (2021). A deeper look at autonomous vehicle ethics: an integrative ethical decision-making framework to explain moral pluralism. Front. Robot. AI 8:632394. doi: 10.3389/frobt.2021.632394
Ruan, W., Kang, S., and Song, H. (2020). Applying protection motivation theory to understand international tourists’ behavioural intentions under the threat of air pollution: a case of Beijing, China. Curr. Issues Tour. 23, 2027–2041. doi: 10.1080/13683500.2020.1743242
Rughiniș, C., Vulpe, S.-N., Țurcanu, D., and Rughiniș, R. (2025). AI at the knowledge gates: institutional policies and hybrid configurations in universities and publishers. Front. Comput. Sci. 7:1608276. doi: 10.3389/fcomp.2025.1608276
Sain, Z. H., and Lawal, U. S. (2024). Morality in higher education’s AI integration: examining ethical stances on implementation. J. Educ. Manag. Res. 3, 1–15. doi: 10.61987/jemr.v3i1.351
Scherer, R., Siddiq, F., and Tondeur, J. (2019). The technology acceptance model (TAM): a meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Comput. Educ. 128, 13–35. doi: 10.1016/j.compedu.2018.09.009
Sher, M.-L., Talley, P. C., Yang, C.-W., and Kuo, K.-M. (2017). Compliance with electronic medical records privacy policy: an empirical investigation of hospital information technology staff. Inquiry 54. doi: 10.1177/0046958017711759
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int. J. Hum.-Comput. Stud. 146:102551. doi: 10.1016/j.ijhcs.2020.102551
Shrivastava, P. (2025). Understanding acceptance and resistance toward generative AI technologies: a multi-theoretical framework integrating functional, risk, and sociolegal factors. Front. Artif. Intell. 8:1565927. doi: 10.3389/frai.2025.1565927
Smutny, P., and Schreiberova, P. (2020). Chatbots for learning: a review of educational chatbots for the facebook messenger. Comput. Educ. 151:103862. doi: 10.1016/j.compedu.2020.103862
Sommestad, T., Karlzén, H., and Hallberg, J. (2015). A meta-analysis of studies on protection motivation theory and information security behaviour. Int. J. Inf. Secur. Priv. 9, 26–46. doi: 10.4018/IJISP.2015010102
Strahovnik, V. (2018). Ethical education and moral theory. Metod. Ogl. 25, 11–29. doi: 10.21464/mo.25.2.1
Su, D. N., Truong, T. M., Luu, T. T., Huynh, H. M. T., and O’Mahony, B. (2022). Career resilience of the tourism and hospitality workforce in the COVID-19: the protection motivation theory perspective. Tour. Manag. Perspect. 44:101039. doi: 10.1016/j.tmp.2022.101039
Sukirman, S., Setiawan, A., Chamsudin, A., Yuliana, I., and Wantoro, J. (2024). Exploring student perceptions and acceptance of ChatGPT in enhanced AI-assisted learning, in 2024 International Conference on Smart Computer, IoT and Machine Learning (SIML), 291–296.
Susnjak, T., and McIntosh, T. R. (2024). ChatGPT: the end of online exam integrity? Educ. Sci. 14:656. doi: 10.3390/educsci14060656
Vance, A., Siponen, M., and Pahnila, S. (2012). Motivating IS security compliance: insights from habit and protection motivation theory. Inf. Manag. 49, 190–198. doi: 10.1016/j.im.2012.04.002
Wang, H., Zhang, J., Luximon, Y., Qin, M., Geng, P., and Tao, D. (2022). The determinants of user acceptance of mobile medical platforms: an investigation integrating the TPB, TAM, and patient-centered factors. Int. J. Environ. Res. Public Health 19:10758. doi: 10.3390/ijerph191710758
Xu, N., Wang, K.-J., and Lin, C.-Y. (2022). Technology acceptance model for lawyer robots with AI: a quantitative survey. Int. J. Soc. Robot. 14, 1043–1055. doi: 10.1007/s12369-021-00850-1
Yakubu, M. N., David, N., and Abubakar, N. H. (2025). Students’ behavioural intention to use content generative AI for learning and research: a UTAUT theoretical perspective. Educ. Inf. Technol. 30, 17969–17994. doi: 10.1007/s10639-025-13441-8
Yang, Y. (2024). Influences of digital literacy and moral sensitivity on artificial intelligence ethics awareness among nursing students. Healthcare 12:2172. doi: 10.3390/healthcare12212172
Yilmaz, R., and Karaoglan Yilmaz, F. G. (2023). The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Comput. Educ. Artif. Intell. 4:100147. doi: 10.1016/j.caeai.2023.100147
Zarifis, A., and Efthymiou, L. (2022). The four business models for AI adoption in education: giving leaders a destination for the digital transformation journey, in 2022 IEEE global engineering education conference (EDUCON), 1868–1872.
Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int. J. Educ. Technol. High. Educ. 16:39. doi: 10.1186/s41239-019-0171-0
Zhai, X. (2022). ChatGPT user experience: Implications for education. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4312418
Zhang, X., Han, X., Dang, Y., Meng, F., Guo, X., and Lin, J. (2017). User acceptance of mobile health services from users’ perspectives: the role of self-efficacy and response-efficacy in technology acceptance. Inform. Health Soc. Care 42, 194–206. doi: 10.1080/17538157.2016.1200053
Zhou, H., and Li, Y. (2023). ChatGPT/AIGC and modernization of education governance: also on the transformation of education governance in the digital era. J. East China Norm. Univ. 41:36. doi: 10.16382/j.cnki.1000-5560.2023.07.004
Keywords: AIGC, behavioral intention, educational technology, perceived ethical concern, protection motivation theory, technology acceptance model
Citation: Yu T, Tian Y, Huang Q, Cheng Z and Zhang R (2026) Explaining adoption of AI tools in education: a dual-path model of ethical concern and functional value. Front. Psychol. 16:1735913. doi: 10.3389/fpsyg.2025.1735913
Edited by:
Shujin Zhong, University of North Florida, United States
Reviewed by:
Fernando Marmolejo-Ramos, Flinders University, Australia
Alex Zarifis, University of Southampton, United Kingdom
Mohammad Mominur Rahman, Hamad bin Khalifa University, Qatar
Copyright © 2026 Yu, Tian, Huang, Cheng and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ru Zhang, zhangru@unn.edu.cn