
ORIGINAL RESEARCH article

Front. Educ., 09 December 2025

Sec. Digital Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1709370

This article is part of the Research Topic: The Role of AI in Transforming Literacy: Insights into Reading and Writing Processes.

The paradox of productive irritation: mapping the stress–coping loop that sustains generative-AI engagement


Amrullah Satoto1, Noor Raihan Ab Hamid1, Norlina Kamudin1 and Andri Dayarana K. Silalahi2*
  • 1Department of Doctor of Business Administration, Asia e University, Subang Jaya, Malaysia
  • 2Department of Marketing and Logistics Management, College of Management, Chaoyang University of Technology, Taichung, Taiwan

Generative AI promises academic efficiency yet often delivers flawed answers, leaving users “irritated but engaged.” To explain this paradox, we merge the Transactional Model of Stress and Coping with Cognitive Dissonance Theory and survey Indonesian and Taiwanese academics who had used ChatGPT for at least a month (N = 388). Partial Least Squares analysis shows that response failures and low AI literacy sharply raise frustration; frustration, in turn, both directly sustains continuance intention and indirectly does so through heightened resistance to change. The indirect route dominates in Indonesia, where higher switching costs foster inertia, whereas Taiwanese users convert frustration into exploratory recommitment. These findings re-cast resistance as an adaptive buffer rather than a mere barrier and reveal culture-specific coping paths that keep imperfect AI in daily workflows. The study advances stress-and-dissonance theory integration and guides institutions toward balanced strategies combining accuracy auditing, literacy scaffolding, and context-sensitive expectation management.

1 Introduction

Generative artificial intelligence systems such as ChatGPT have moved from experimental curiosities to everyday fixtures on university campuses worldwide. Faculty automate administrative email, doctoral candidates refine literature reviews, and students query LLMs for instant explanations of complex theories (Chen et al., 2025; Kiryakova and Angelova, 2023; Song and Song, 2023; Parker et al., 2024). Advocates point to sharp gains in productivity, richer multimodal feedback, and democratized access to academic support for non-native English speakers (Ghio, 2024). Massive open datasets enable ChatGPT to mimic an all-purpose tutor that never sleeps, ostensibly leveling the playing field in resource-constrained institutions. Yet the same statistical machinery that produces eloquent prose also fabricates non-existent citations, misclassifies disciplinary terminology, and occasionally embeds cultural or gender biases. Concerns about academic integrity, epistemic trust, and unequal power relations surface whenever AI outputs are presented as authoritative knowledge (Hsu and Silalahi, 2024). Thus, higher education finds itself negotiating a delicate balance: how to harness LLM advantages without eroding foundational values of critical thinking and evidence-based reasoning.

In East and Southeast Asia, governments and universities have aggressively piloted generative AI to reinforce their status as digital economy hubs. Indonesia's Ministry of Education recently issued guidelines encouraging lecturers to use ChatGPT for rubric-aligned assessment feedback, while Taiwan's National Science and Technology Council funds interdisciplinary centers devoted to LLM pedagogy (Li et al., 2024). Early adopters report tangible “pros”: lecturers reclaim hours previously spent on repetitive grading; postgraduate researchers accelerate code debugging and grant writing; and international students receive bilingual clarifications personalized to local curriculum (Gao et al., 2024). However, empirical audits reveal persistent “cons.” In this article, we use hallucinations to mean plausible, confidently worded content or citations that are fabricated or not grounded in verifiable sources. Hallucinated jurisprudence appears in law essays; outdated clinical protocols slip into nursing case studies; and over-reliance on AI scaffolding reduces metacognitive self-regulation among undergraduates (Pütz and Esposito, 2024). Moreover, uneven digital fluency exacerbates inequities: elite urban campuses integrate prompt-engineering workshops, whereas rural colleges struggle with bandwidth limits and limited faculty training. These contrasting outcomes underscore the urgency of examining psychological and contextual mechanisms shaping sustained AI engagement in education.

Two intersecting risk factors dominate faculty and student complaints. First, response failures—irrelevant, implausible, or factually wrong outputs—undermine learning objectives and damage confidence in digital resources (Zhou and Wang, 2025). Second, pervasive AI-literacy deficiencies leave users ill-equipped to craft effective prompts, assess probabilistic responses, or execute verification routines (Long and Magerko, 2020). These deficits are not trivial: one study of synchronous online mathematics classes found GPT-4o mislabeled more than 20 percent of discourse-coding tasks, yet novice instructors accepted the labels uncritically. Repeated exposure to such errors induces user frustration, a negative affective reaction linked to technostress, disengagement, and burnout (Tarafdar et al., 2019). Paradoxically, anecdotal evidence suggests that many academics continue to rely on ChatGPT day after day, even while voicing dissatisfaction on faculty forums. This coexistence of frustration and persistence raises an intellectually puzzling question: why do users stick with a tool that consistently violates their expectations?

Most empirical studies approach the issue through a technical or utilitarian lens, focusing on algorithmic accuracy, interface tweaks, or traditional technology acceptance variables such as perceived usefulness and ease of use (Ma et al., 2025; Camilleri, 2024). A smaller body explores algorithmic aversion and automation bias, yet seldom follows users beyond initial trial periods (Jones-Jang and Park, 2023). Critically, scholars have not mapped the full cognitive–emotional cascade from repeated AI errors to long term continuance intentions, nor have they compared these processes across cultural contexts where social norms about authority, uncertainty, and face differ markedly (Zhong et al., 2024). Furthermore, moderators such as instructional role (faculty vs. student), prior coding experience, or institutional mandates remain understudied even though they plausibly amplify or attenuate frustration. Without such nuance, prescriptions risk oversimplification—for example, recommending generic prompt engineering workshops that fail to address deeper motivational conflicts or identity concerns.

To bridge these gaps, we integrate Cognitive Dissonance Theory (Festinger, 1957) with the Transactional Model of Stress and Coping (Lazarus, 1984). When users who view ChatGPT as cutting-edge encounter glaring inaccuracies, they experience dissonance between favorable expectations and unfavorable evidence. They can resolve this discomfort either by abandoning the system or by rationalizing its flaws. Concurrently, the stress-coping framework posits that users appraise response failures as either controllable (through better prompts) or uncontrollable (an unavoidable LLM limitation). Where failures seem uncontrollable but benefits remain high, individuals may resort to emotion-focused coping, tolerating frustration while resisting change. The two theories together predict a sequence whereby response failures and literacy gaps elevate frustration, which heightens resistance to change, ultimately sustaining continuance intention despite ongoing dissatisfaction. Such an integrated perspective has yet to be empirically validated in generative-AI settings.

Guided by this dual-theory lens, the present study advances four objectives. First, we quantify how response failures and AI-literacy deficiency independently influence perceived frustration among academic users in Indonesia and Taiwan. Second, we test whether frustration directly fosters resistance to change, understood as a dispositional and situational reluctance to modify established routines (Ford and Ford, 2009). Third, we examine whether resistance mediates the frustration–continuance link, providing a mechanism through which negative emotions paradoxically support ongoing use. Fourth, we conduct multi-group analysis to determine whether pathway strengths differ across the two cultural contexts, thus enriching cross-cultural human–computer-interaction literature (Oreg et al., 2008). Addressing these aims contributes to theory by extending dissonance and stress models into AI-mediated learning, and to practice by identifying leverage points—such as targeted literacy training or expectation management—to transform reluctant persistence into informed, confident usage.

The study canvassed academics in Indonesia and Taiwan who had integrated ChatGPT-Plus into routine scholarly and administrative activities for at least 1 month. Drawing on established multi-item instruments for response failures, AI literacy, frustration, resistance to change, and continuance intention (Long and Magerko, 2020; Oreg et al., 2008), the survey captured both system-centered and user-centered antecedents of affective reactions to large language models. Analyses confirmed a coherent psychological sequence: frequent erroneous or irrelevant outputs, together with limited metacognitive skill for diagnosing those errors, heightened users' frustration; frustration, when unaddressed, coalesced into a reluctance to modify entrenched workflows; and this reluctance paradoxically sustained engagement with the very technology that provoked dissatisfaction. Cultural context refined this progression. Participants from Taiwan—embedded in a dense digital ecosystem—showed greater readiness to experiment with alternative tools once frustration surpassed a personal tolerance threshold, whereas their Indonesian counterparts tended to absorb irritation, arguably owing to higher switching costs and stronger institutional endorsement of ChatGPT. In this sense, resistance to change emerged not merely as a behavioral barrier but as an emotional buffer that enables users to reconcile negative affect with perceived instrumental value.

These findings unravel both the promise and the perils of generative AI in higher education. Respondents credited ChatGPT with substantial time savings, broader access to discipline-specific exemplars, and enhanced linguistic confidence, echoing recent evidence of productivity gains (Chen et al., 2025; Song and Song, 2023). Yet the same users reported cognitive overload from continuous fact-checking, ethical unease regarding authorship, and anxiety about declining critical-thinking opportunities—concerns consistent with critiques by Archibald and Clark (2023) and Sundar and Liao (2023). The data therefore suggest that institutional interventions should move beyond interface refinements toward strategies that address the cognitive–emotional nexus: inoculation messages that normalize occasional hallucinations (Weiler et al., 2022), peer coaching circles that cultivate prompt-engineering literacy, and uncertainty dashboards that surface confidence scores in real time. Conceptually, the research extends cognitive dissonance and stress-coping theories by showing how resistance can operate as an adaptive mechanism rather than a simple impediment, flagging latent dissonance before it manifests as wholesale rejection. Future work should employ longitudinal designs, experimental expectation management, and multicountry comparisons across alternative large language models to guide the evolution of generative AI from a disruptive novelty to a trusted, equitable partner in academic practice.

2 Literature review

2.1 Previous studies and gaps identification

Contemporary examinations of generative-AI continuance have largely foregrounded factors that encourage prolonged interaction, yet they do so from divergent analytical vantage points as shown in Table 1. Drawing on motivation theory, Wolf and Maier (2024) reveal that sustained ChatGPT use in everyday life materializes only when a distinctive “recipe” of intrinsic and extrinsic motives co-occurs; in their configurational analysis, perceived ease of use and a sense of technological novelty emerge as necessary but not sufficient catalysts, underscoring the interdependence of experiential and utilitarian rewards. Extending this conversation, Kim and Baek (2025) import the Investment Model from relationship science to show that functional utility and hedonic playfulness jointly cultivate satisfaction and psychological commitment, which, in turn, translate into stronger intentions to keep engaging with the chatbot. A complementary, content-centric lens is provided by Mun and Hwang (2025), who demonstrate that the accuracy, richness, timeliness, and relevance of ChatGPT's responses significantly elevate perceived usefulness and source trust—two pivotal mediators of continuance in information-systems research.


Table 1. Previous studies and gaps.

A nascent but crucial second stream interrogates the hindrances to generative-AI persistence by foregrounding negative affect and cognitive conflict. Employing Behavioral Reasoning Theory, Khizar et al. document how academic researchers weigh efficiency dividends against accuracy doubts and ethical qualms, revealing that the latter can generate resistance potent enough to override adoption incentives. In a related but non-AI context, Marikyan et al. (2023) find that when smart-home technologies underperform, users experience cognitive dissonance and deploy coping tactics—such as expectation adjustment or intensified information search—to restore psychological equilibrium, thereby continuing to use devices they simultaneously find disappointing. These insights suggest that abandonment is not the inevitable outcome of dissatisfaction; rather, users may actively rationalize or compensate for technological shortcomings. However, the extant literature has yet to explicate how such dissonance-reduction processes operate in the generative-AI arena, where systematic response failures and widespread literacy deficits are commonplace. Moreover, the role of culture in shaping whether irritation culminates in rejection or in adaptive persistence remains virtually unexplored, signaling a substantive theoretical gap in post-adoption technology research.

The present study responds directly to these omissions by integrating Cognitive Dissonance Theory with the Transactional Model of Stress and Coping to examine Indonesian and Taiwanese academics who remain "irritated yet engaged" with ChatGPT despite frequent inaccuracies and limited AI know-how. By modeling perceived frustration, resistance to change, and continuance intention simultaneously, we illuminate a previously overlooked mechanism whereby negative emotional arousal is channeled into coping strategies—such as query reformulation, expectation lowering, or repurposing the tool—thus allowing users to reconcile disconfirmed expectations with the platform's perceived instrumental value. Our cross-cultural design further reveals that the strength and direction of these coping trajectories vary meaningfully between user populations, highlighting the moderating influence of contextual factors on dissonance resolution. Consequently, the study not only extends the generative-AI continuance literature beyond its prevailing positivity bias but also reframes resistance as a diagnostic signal rather than a mere adoption barrier. In doing so, it offers a more nuanced, culturally sensitive account of why and how users elect to persist with AI systems that routinely frustrate them, thereby filling the theoretical and empirical gaps identified in prior research.

2.2 Cognitive dissonance

Cognitive-dissonance theory states that people feel psychological discomfort when their beliefs, attitudes, or behaviors do not align, and they act to reduce that tension (Festinger, 1957). Laboratory and neuroimaging research shows that even small inconsistencies increase autonomic arousal and activate neural conflict networks, confirming the robustness of the effect across methods and samples (Harmon-Jones et al., 2015; Izuma and Murayama, 2019). Meta-analytic evidence further indicates that the strength of dissonance reactions depends on how central the disconfirmed belief is, whether justifications are available, and how skilled individuals are at self-regulation (Cooper, 2019; Gawronski and Brannon, 2020). Recent studies in professional settings such as finance, medical diagnostics, and data governance report that decision makers expend more cognitive effort to justify inconsistent outcomes when reputation or resource commitments are high (Ryoo et al., 2025). These findings underline that dissonance is an everyday regulatory force, not an abstract anomaly, and they highlight its relevance for technology users who invest time, credibility, and emotional energy in complex digital tools.

When generative AI outputs clash with a user's expectation of accuracy, a belief–experience mismatch forms the core of dissonance. A learner may believe that ChatGPT improves productivity yet repeatedly see answers that contain factual errors. Studies on IT value misalignment show users tackle this conflict by focusing on successful outcomes, discounting failures, or redefining what counts as a satisfactory result (Dinev et al., 2009). Such selective attention is most likely when viable substitutes are scarce or switching costs are high, conditions that often exist in academic work constrained by deadlines and institutional licenses. Therefore, continuance of use cannot be explained purely by instrumental benefits; it must also account for the silent cognitive work people undertake to keep their attitudes and experiences in harmony.

Dissonance theory also clarifies why negative experiences do not always lead to abandonment. Service-recovery research shows that customers who face performance lapses often attribute problems to situational factors rather than revise a favorable global attitude toward the provider (Choi et al., 2021). Longitudinal diary studies of writing tools report that students encountering AI errors tell themselves the model is “still learning” or that better prompts will fix the problem, protecting their original positive view (Nguyen et al., 2024; Tsai et al., 2024). Stone and Taylor (2021) note that such self-persuasion intensifies when past benefits remain salient; speed and idea generation, for instance, can outweigh annoyance from occasional inaccuracies. Ultimately, cognitive dissonance offers a direct explanation for the paradox of users who stay engaged with ChatGPT while simultaneously expressing irritation at its shortcomings.

2.3 Transactional model of stress and coping

The transactional model proposed by Lazarus (1984) defines stress as a process in which individuals appraise environmental demands and compare them with perceived coping resources. Primary appraisal asks whether an event threatens wellbeing, whereas secondary appraisal evaluates control and support options (Carver and Connor-Smith, 2010). A demand becomes stressful only when it is judged as personally relevant and beyond current capability. Longitudinal studies in health, education, and organizational science show that these appraisals predict physiological arousal, affect, and adjustment better than objective workload indicators (Crum et al., 2017; Skinner et al., 2003). Because the model centers on cognition, it has become essential in information systems research, where new technologies often outpace user skills and reshape perceptions of control (Ayyagari et al., 2011; Maier et al., 2022).

Generative AI heightens both demand and uncertainty. Hallucinated content, opaque reasoning paths, and frequent interface updates increase primary appraisals of threat, while limited knowledge of prompt design and fact-checking reduces secondary appraisals of coping capacity (Pullins et al., 2020). Empirical work on technostress finds that this imbalance predicts emotional exhaustion and lower job satisfaction if unmanaged (Califf et al., 2020). In education, students exposed to inconsistent AI feedback report higher cognitive load and lower self-efficacy, illustrating how demand–resource gaps erode engagement (Ragu-Nathan et al., 2008). Therefore, continued ChatGPT use amid frequent frustration requires users to restore balance, either by acquiring new skills or redefining task importance, before stress undermines persistence.

Coping responses follow two main paths. Problem-focused coping aims to alter the situation—users may refine prompts, consult tutorials, or adopt third-party verification tools. Emotion-focused coping regulates feelings—users might lower expectations, seek peer support, or interpret errors as learning moments (Folkman and Moskowitz, 2004). Research in corporate IT adaptation shows that targeted training and peer mentoring strengthen problem-focused coping and reduce technostress symptoms. Where institutional support is absent, emotion-focused strategies become critical; users reframe frustrations to maintain motivation without abandoning the tool (Carter et al., 2020). In the context of ChatGPT, evidence suggests an iterative cycle: initial technical adjustments lessen error frequency, while residual inaccuracies are reinterpreted as acceptable trade-offs. This oscillation brings perceived demands and resources back into equilibrium, explaining why many users remain committed to generative AI even when systemic faults persist.

3 Hypothesis development

3.1 Perceived AI frustration

Perceived AI frustration denotes the irritation, annoyance, and disappointment that arise when an AI application blocks rather than facilitates a user's goals. In human–computer interaction research, frustration emerges whenever an action–outcome mismatch undermines task completion (Ayyagari et al., 2011). Within generative AI, the mismatch intensifies because users assume a conversational agent will deliver fast, accurate answers; repeated bias, irrelevance, or factual error violates this assumption, provoking negative affect (Komiak and Benbasat, 2006). Such affect is more than transient discomfort: research shows that frustration can erode confidence, heighten cognitive load, and diminish perceived usefulness (Bhattacherjee, 2001). We therefore define perceived AI frustration as the user's appraisal that ChatGPT impedes rather than aids performance, producing a blend of irritation and disappointment. This definition integrates prior work on technostressors that interrupt workflow (Ayyagari et al., 2011) with evidence that conversational agents amplify emotional responses when their dialogue appears authoritative but proves defective (Wang G. et al., 2023). Conceptually, frustration is the first emotional signal that the expected value of AI support is slipping below an acceptable threshold, potentially triggering coping or rejection.

From a technostress lens, perceived AI frustration is the joint outcome of human side and system side stressors. On the human side, AI literacy deficiency—limited skills in prompt engineering or critical evaluation—mirrors “techno complexity,” where users feel inadequate when technology outpaces their competence (Tarafdar et al., 2020). Each poorly crafted prompt or misunderstood response reinforces self-doubt, escalating annoyance. On the system side, response failures—factual inaccuracies, hallucinations, or opaque citations—align with “techno uncertainty,” the stress induced by unpredictable system behavior (Stein et al., 2019). Generative AI's probabilistic outputs magnify that uncertainty, as even expert users cannot guarantee accuracy. Empirical work confirms that simultaneous exposure to complexity and uncertainty produces the strongest frustration profiles, reducing satisfaction and trust (Stein et al., 2019). Thus, perceived AI frustration materializes when users face the double burden of limited mastery and unreliable performance, encapsulating the emotional cost of navigating human and technological constraints.

In the present study, perceived AI frustration is pivotal for explaining why students, faculty, and administrative staff continue using ChatGPT despite chronic shortcomings. We treat frustration as an immediate negative appraisal that surfaces when ChatGPT offers incorrect references or context poor explanations during coursework preparation or policy drafting. Drawing on coping theory, we posit that this appraisal can trigger secondary emotions—such as resistance to change or dissonance driven rationalization—that paradoxically sustain use (Bhattacherjee, 2001). Preliminary interviews indicate that staff often “double check” ChatGPT outputs instead of abandoning the tool, while students reformulate prompts to salvage prior investment of time. Such behaviors echo findings that users sometimes increase, rather than curtail, technology engagement to regain control after failure (Liang et al., 2023). By modeling perceived AI frustration as both a symptom of misaligned expectations and a catalyst for coping strategies, our research illuminates the emotional mechanics underpinning continued reliance on generative AI in higher education.

3.2 Continuance intention

Continuance intention (CI) denotes a user's resolve to keep employing a technology after initial adoption and has long been treated as the best predictor of actual long term use (Bhattacherjee, 2001). Within information system scholarship, CI is chiefly a function of perceived usefulness and perceived ease of use—if a system increases efficiency or accuracy and entails little effort, users embed it in daily routines (Davis, 1989). Recent large sample evidence confirms the pattern for generative AI: Wolf and Maier (2024) show that ChatGPT continuance hinges on a dual calculus of extrinsic payoff (task performance) and intrinsic satisfaction, with ease of use emerging as a necessary condition. Similar benefits drive sustained adoption of analytics tools in knowledge work (Yan et al., 2021). For many professionals, ChatGPT expedites drafting, summarizing, and language polishing, reinforcing the perception of high net value. Grounded in this literature, the present study treats CI as a cognitive evaluation balancing experienced advantages against costs and asks how a seemingly negative emotional state—AI induced frustration—fits within that evaluation, especially when unique functional gains remain salient.

Contrary to the intuitive view that frustration invariably discourages technology use, research on IT stress reveals a paradoxical engagement effect: when users still perceive irreplaceable value, frustration can stimulate deeper involvement (Beaudry and Pinsonneault, 2010). In generative AI settings, inaccuracies, hallucinations, or slow response times provoke irritation, yet many users respond with problem focused coping—refining prompts, consulting external sources, or seeking peer advice—rather than abandoning the tool. Studies of social media fatigue show a comparable “stickiness” when platforms fulfill critical needs despite stress (Cheikh-Ammar, 2020). Early work on ChatGPT echoes this pattern, finding that professionals persevere because time savings and creative support outweigh annoyance (Silalahi and Demirci, 2025). Coping efforts help users recalibrate expectations and regain a sense of control, which, in turn, reinforces commitment. Building on coping theory and these emerging empirical insights, we argue that higher levels of perceived AI frustration can coexist with—and even amplify—intentions to continue using ChatGPT. Hence, we posit.

H1: Perceived AI frustration positively influences continuance intention.

3.3 Resistance to change

Resistance to Change refers to a relatively stable inclination to preserve existing routines and to avoid alterations that may increase effort, uncertainty, or perceived loss of control. Conceptual work in organizational psychology treats it as a dispositional trait comprising routine seeking, emotional reaction to change, short-term focus, and cognitive rigidity (Oreg, 2006; Oreg et al., 2008). Information-systems scholars, however, emphasize a situational dimension that surfaces once a technology becomes embedded in day-to-day work, making its removal itself a disruptive “change event” (Lapointe and Rivard, 2005). Generative AI now occupies that embedded position for many knowledge workers; abandoning ChatGPT entails learning substitute tools, revising workflows, and risking temporary productivity loss. Accordingly, the present study theorizes resistance to change not as opposition to adopting ChatGPT but as a preference for retaining it. By framing resistance as status-quo maintenance, the research extends post-adoption literature that has traditionally focused on switching intentions in commoditized services rather than on reluctance to exit an incumbent AI assistant that still delivers net value despite imperfections.

When the incumbent system is ChatGPT, dispositional and situational resistance should translate into stronger continuance intention. Status quo bias models argue that potential losses loom larger than comparable gains, causing individuals to overweight switching costs relative to benefits (Samuelson and Zeckhauser, 1988). Empirical tests in ERP and knowledge management contexts demonstrate that employees who score high on routine seeking are less likely to replace familiar software even when superior alternatives exist (Kim and Kankanhalli, 2009; Jasperson et al., 2005). Similar mechanisms apply to students or faculty who have integrated ChatGPT into writing, coding, or brainstorming activities; abandoning the tool would create workflow discontinuities and cognitive overhead. Social influence compounds this calculus: if peer networks continue to rely on ChatGPT, deviation imposes coordination and reputational costs (Sun, 2014). Drawing on these convergent strands, the study predicts that stronger resistance will foster a deliberate choice to remain with the incumbent AI solution. Therefore:

H2: Resistance to change positively influences continuance intention.

Threat-appraisal theory posits that negative affect heightens defensive responses designed to restore psychological equilibrium (Beaudry and Pinsonneault, 2010). Recurrent ChatGPT errors—hallucinations, citation failures, or biased outputs—violate user expectations and trigger frustration, a high-arousal emotion associated with blocked goal attainment (Gross, 2015). Laboratory studies on algorithm aversion show that observing a single system failure significantly reduces openness to future algorithmic advice (Dietvorst et al., 2015), while field data indicate that such frustration increases reliance on familiar usage patterns rather than encouraging exploration (Logg et al., 2019). Hence, frustration fuels skepticism toward experimenting with novel prompts or alternative tools, consolidating a defensive commitment to the status quo. This reasoning aligns with research on IT change projects where negative emotions predict overt or covert resistance behaviors (Bhattacherjee and Hikmet, 2007). Therefore, heightened frustration is expected to strengthen users' inclination to cling to existing ChatGPT routines.

H3: Perceived AI frustration positively influences resistance to change.

Cognitive dissonance theory argues that individuals facing a disconfirmed expectation often restore consonance by re-evaluating chosen vs. rejected alternatives, thereby bolstering attachment to the current option (Harmon-Jones and Harmon-Jones, 2007). Post-adoption studies of mobile services and social platforms corroborate that service failures can, paradoxically, enhance loyalty when exit barriers are salient and when users attribute problems to correctable factors (Turel et al., 2010). In this setting, frustration highlights performance risk yet simultaneously magnifies the anticipated learning costs of switching, channeled through resistance to change. Resistance thus becomes the psychological mechanism that reconciles negative affect with pragmatic reliance, converting irritation into renewed commitment. Consistent with this logic and with prior evidence that resistance mediates the impact of stressors on technology continuance (Bhattacherjee and Barfar, 2011), the study proposes:

H4: Resistance to change mediates the positive relationship between perceived AI frustration and continuance intention.

3.4 Response failure

Response failure denotes those moments when a generative AI delivers content that is inaccurate, incomplete, or unverifiable, thereby breaching the user's baseline expectation of reliability. Prior human–computer interaction work in high-impact outlets demonstrates that even a single conspicuous error can weaken algorithmic trust and prompt users to reevaluate the tool's value proposition (Dietvorst et al., 2018; Logg et al., 2019). Within large language model contexts, such breakdowns stem from limited knowledge cutoffs, hallucinations, or misinterpretation of prompts, all of which foreground the technology's fallibility. Scholars have linked these failures to behavioral resistance, noting that users often revert to familiar resources—human colleagues, search engines, printed manuals—once confidence in an AI assistant deteriorates (Cao et al., 2024). Building on this stream, the present study treats response failure as a salient technological stressor that can erode perceived usefulness and heighten user inertia. By integrating this construct into a stress–coping framework, we extend prior adoption research that has typically focused on performance expectancy and effort expectancy while underspecifying the impact of repeated system errors.

Repeated exposure to response failures is likely to intensify perceived AI frustration because each faulty output resets the user's learning curve and consumes additional cognitive resources. Empirical evidence from service automation studies shows that unresolved chatbot errors elevate negative affect, reduce flow, and prompt compensatory behaviors such as rephrasing questions or abandoning the session (Shin et al., 2024). Cognitive appraisal theory suggests that when performance deficits are both salient and recurrent, users label the situation as obstructive rather than challenging, producing stronger frustration responses (Smith and Lazarus, 1993). In the educational and workplace domains, users have reported heightened annoyance when ChatGPT fabricates citations or offers outdated statistics, emphasizing that technical shortcomings are experienced not as minor inconveniences but as direct barriers to task completion (Issa et al., 2024). These insights indicate a robust positive linkage between response failure and perceived AI frustration. Accordingly, this study posits the following hypothesis:

H5: Response failure significantly increases perceived AI frustration.

3.5 AI literacy deficiency

AI literacy deficiency denotes a shortfall in the knowledge and skills required to understand how an artificial-intelligence system operates, recognize its limitations, and interact with it effectively. Recent work in leading information-systems outlets shows that users who possess richer mental models of AI display stronger critical-evaluation skills and experience fewer usage barriers (Long and Magerko, 2020; Shneiderman, 2020). In contrast, individuals with poor AI literacy often rely on guesswork when formulating prompts or interpreting outputs, which leads to misaligned expectations and decision errors (Tarafdar et al., 2024). Within workplace and educational settings, inadequate AI literacy has been linked to lower task performance and heightened cognitive burden because users struggle to judge the reliability of generated content (Chiang and Chen, 2022). By situating AI literacy deficiency as a core human-side stressor, the present study extends this stream by investigating its emotional repercussions during sustained ChatGPT use. Understanding the depth of this deficiency is essential, because it frames how users appraise system behavior and whether they possess the competencies needed to manage AI interactions successfully.

Low AI literacy is expected to act as a catalyst for negative affect during generative-AI encounters. When users cannot predict why ChatGPT produces opaque, incomplete, or biased answers, they must expend additional effort to verify information—an effort shown to deplete attentional resources and elevate irritation (Pullins et al., 2020). Empirical evidence from high-impact journals indicates that limited digital expertise amplifies technostress, which manifests as frustration and feelings of inadequacy when technology behaves unpredictably (Ayyagari et al., 2011). In AI contexts, this frustration is further intensified because the probabilistic nature of large language models obscures causal explanations for errors, leaving unskilled users with few strategies to regain control (Benbya et al., 2021). Consequently, each unresolved interaction compounds unease and diminishes trust, reinforcing a cycle of negative affect. Drawing on these insights, the present research posits a direct, positive association between AI literacy deficiency and perceived AI frustration. Accordingly, the following hypothesis is advanced:

H6: AI literacy deficiency significantly increases perceived AI frustration.

As illustrated in Figure 1, the model positions perceived AI frustration as the fulcrum linking external stressors to post-adoption behavior. Drawing on the transactional view of stress (Lazarus, 1984), the framework differentiates human-origin stressors—users' limited AI literacy—and system-origin stressors—ChatGPT's response failures. When either source of strain is appraised as taxing or unpredictable, it elicits frustration, an affective state frequently observed in technology episodes that lack transparency or controllability (Pullins et al., 2020). The right side of the figure embeds this emotion within a cognitive dissonance cycle (Festinger, 1957): users who simultaneously value ChatGPT's utility yet feel aggravated by its flaws must restore psychological consistency. Two behavioral routes are depicted. One route channels frustration directly into a renewed determination to keep using the tool, reflecting a self-justifying investment of additional effort often noted in digital work settings. The other directs frustration into resistance to change, a coping stance characterized by a preference for familiar routines; once established, this inertia further sustains ongoing use. Collectively, the model portrays continuance as an adaptive balance between instrumental commitment and change-averse coping, both set in motion by frustration that originates in human and system deficiencies (see Figure 1).


Figure 1. The study's conceptual framework.

4 Methods

4.1 Operationalization and measures

Response failure reflects episodes in which ChatGPT delivers content that is incomplete, inaccurate, or contextually irrelevant, thereby interrupting the flow of interaction and eroding conversational coherence (Klein et al., 2002; Weiler et al., 2022). AI literacy deficiency denotes the absence of skills needed to formulate effective prompts, interpret probabilistic outputs, and judge ethical or privacy implications; it builds on foundational work on digital and AI literacy in workplace settings (Long and Magerko, 2020; Wang Q. et al., 2023). Perceived AI frustration captures the irritation and impatience that surface when repeated attempts to obtain satisfactory answers fail, a sentiment long recognized in studies of computer-induced stress (Lazar et al., 2006). Continuance intention is conceptualized as a forward-looking decision to keep using ChatGPT beyond initial experimentation, echoing the expectation–confirmation logic articulated by Bhattacherjee (2001). Finally, resistance to change refers to a dispositional preference for maintaining familiar routines in the face of new technologies, consistent with cross-cultural evidence on change aversion (Oreg et al., 2008).

Scales were adapted to the ChatGPT context through a systematic three step procedure. First, wording was tailored to reflect generative AI interactions— for example, the response failure item “The system's answer was inadequate” was rephrased as “ChatGPT's answer was inadequate for my task.” Second, two bilingual researchers translated all items into Indonesian and Mandarin using a back translation protocol to ensure semantic equivalence. Third, a panel of five domain experts reviewed the drafts for content validity and cultural appropriateness, resolving discrepancies by consensus. Each construct was operationalized with three to five indicators rated on a seven point Likert scale anchored by 1 = “strongly disagree” and 7 = “strongly agree,” a response format shown to maximize variance in cross cultural IS surveys. Where necessary, negatively worded items were reverse scored to reduce acquiescence bias, and minor lexical changes were introduced to maintain consistent tone and reading level across languages.
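To make the scoring procedure concrete, the minimal sketch below illustrates how negatively worded indicators on a seven-point scale can be reverse scored before analysis; the column names are hypothetical and do not correspond to the actual instrument.

```python
import pandas as pd

# Hypothetical responses on a 1-7 Likert scale; "rtc3_neg" stands in for a
# negatively worded resistance-to-change item (column names are illustrative only).
responses = pd.DataFrame({
    "rf1": [6, 7, 5],
    "rtc3_neg": [2, 1, 3],
})

# Reverse-score negatively worded items so that higher values always indicate
# stronger endorsement of the construct: on a 7-point scale, x becomes 8 - x.
responses["rtc3"] = 8 - responses["rtc3_neg"]
print(responses)
```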

4.2 Sampling and data collection procedure

Purposive sampling was selected because the research questions required respondents who could speak from direct, sustained experience with generative AI in an academic setting rather than a general population unfamiliar with the tool (Patton, 2015). Deans in business, computer science, and education faculties at five Indonesian and four Taiwanese universities were first approached to circulate an invitation that outlined inclusion criteria: (a) age 18 years or older, (b) weekly ChatGPT use for at least the past month, and (c) recollection of a recent instance in which the system produced a confusing, inaccurate, or otherwise frustrating response. This criterion-based approach ensured that both cognitive evaluations of the technology and the emotional reactions central to the study were salient for participants at the time of data collection. To maximize cultural validity, the questionnaire—developed originally in English—was translated into Indonesian and Mandarin using Brislin's back-translation procedure, followed by a reconciliation meeting with two bilingual doctoral candidates and one professional translator to resolve idiomatic inconsistencies.

Data were gathered during April 2025 via a web-based instrument hosted on Google Forms, a platform chosen for its compatibility with institutional firewall policies in both countries. The survey link was distributed through university listservs and social-media channels popular among academics, including LINE in Taiwan and WhatsApp in Indonesia. Participants were presented with an electronic informed-consent statement detailing the study's purpose, the voluntary nature of participation, data-anonymization procedures, and withdrawal rights. Of the 436 forms returned, 48 were excluded for incompleteness or failure to meet screening questions, resulting in 388 valid cases: 199 from Indonesia and 189 from Taiwan.

4.3 Analysis technique

Structural relationships were examined with Partial Least Squares Structural Equation Modeling executed in SmartPLS 4.1 because PLS-SEM accommodates complex, prediction-oriented models, works reliably with non-normal data, and maintains statistical power in samples below 500 cases (Hair et al., 2017). Prior to model estimation, Harman's single-factor test in SPSS 26 indicated that no single component accounted for the majority of covariance, mitigating concerns about common-method bias (Baumgartner et al., 2021). The measurement model was then evaluated by verifying that item loadings exceeded 0.70 and that each construct met the convergent-validity benchmarks of composite reliability > 0.70 and average variance extracted > 0.50 (Fornell and Larcker, 1981). Discriminant validity was confirmed when the square root of each construct's AVE surpassed its inter-construct correlations and when all heterotrait–monotrait ratios fell below the 0.90 threshold (Henseler et al., 2015). Significance of path coefficients was assessed via a 5,000-resample bias-corrected bootstrap, and overall model adequacy was gauged with the Goodness-of-Fit index, which jointly considers measurement quality and explanatory power (Tenenhaus et al., 2005).
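To illustrate the resampling logic behind the significance tests, the sketch below implements a plain percentile bootstrap for a single path coefficient. SmartPLS applies a bias-corrected variant internally, and `estimate_path` is a hypothetical stand-in for the PLS re-estimation step rather than an actual SmartPLS API.

```python
import numpy as np

def bootstrap_path(data: np.ndarray, estimate_path, n_boot: int = 5000, seed: int = 42):
    """Percentile bootstrap for one structural path coefficient.

    `estimate_path` is a placeholder callable that re-fits the structural model
    on a resampled case matrix and returns the coefficient of interest.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample respondents with replacement
        estimates[b] = estimate_path(data[idx])
    lower, upper = np.percentile(estimates, [2.5, 97.5])   # 95% percentile interval
    return estimates.mean(), (lower, upper)
```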

5 Results

5.1 Sample profile

The demographic profile highlights pronounced national contrasts that may shape both cognitive appraisals of ChatGPT and downstream coping trajectories. Indonesian respondents are evenly split by gender and show a broad age range, with almost one half aged 25 years or older and one-fifth holding postgraduate credentials. This diversity suggests that generative AI adoption in Indonesia has diffused beyond early adopter student segments into faculty and mature learners, a pattern consistent with diffusion theory assertions that perceived relative advantage accelerates cross cohort uptake. By contrast, the Taiwanese cohort is predominantly female (72%) and very young (94% aged 18–24), mirroring campus enrolment demographics and reflecting Taiwan's high tertiary education penetration. The concentration of bachelor level users in Taiwan (52%) vs. high school graduates in Indonesia (52%) further indicates disparate baseline digital capital, which prior work links to differential technology self-efficacy and stress perceptions. Such structural differences provide a useful backdrop for interpreting subsequent cross cultural variance in frustration and resistance mechanisms.

Usage characteristics reinforce these demographic distinctions. More than 73% of Taiwanese participants have engaged with ChatGPT for over 6 months, yet only 12% pay for ChatGPT-Plus, suggesting a preference for exploratory, low-cost experimentation typical of digitally mature ecosystems. Indonesians display a shorter usage history (39% report 3 months or less) but are twice as likely to subscribe to the premium tier, implying stronger perceived utility that justifies monetary investment, possibly to compensate for limited institutional resources. Purpose patterns also diverge: academic tasks dominate in both settings, but professional preparation is notably higher in Indonesia (31% vs. 18%), aligning with labor-market pressures that drive skill acquisition outside formal courses. The sample profile is provided in Table 2.


Table 2. Sample profile.

5.2 Common method variance

Given that all variables were collected from the same respondents at a single point in time, the potential for CMV was assessed with two complementary procedures. First, a Harman single-factor test was performed in SPSS by loading all manifest indicators onto an unrotated exploratory factor solution. The largest latent factor yielded an eigenvalue of 6.343 and accounted for 35.2 % of the total variance—well below the 50% benchmark that signals serious CMV threat (Baumgartner et al., 2021; Podsakoff et al., 2012). Second, full-collinearity VIFs were generated in SmartPLS; values ranged from 1.445 to 2.362, comfortably beneath the conservative 3.3 cutoff recommended for detecting method bias in PLS-SEM models (Kock, 2015).
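Both procedures can be approximated outside SPSS and SmartPLS. The sketch below, offered as a minimal illustration rather than a reproduction of the actual analysis, estimates the single-factor variance share from the indicator correlation matrix (a principal-component approximation of Harman's unrotated solution) and computes full-collinearity VIFs from latent-variable scores; the DataFrames `items` and `scores` are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def harman_single_factor_share(items: pd.DataFrame) -> float:
    """Share of total variance captured by the first unrotated component,
    a common approximation of Harman's single-factor test (the paper reports 35.2%)."""
    eigvals = np.linalg.eigvalsh(items.corr().to_numpy())
    return eigvals.max() / eigvals.sum()

def full_collinearity_vifs(scores: pd.DataFrame) -> pd.Series:
    """Full-collinearity VIFs (Kock, 2015): each construct score regressed on all
    others; values above 3.3 would flag potential method bias."""
    X = sm.add_constant(scores.to_numpy())   # intercept occupies column 0
    return pd.Series(
        [variance_inflation_factor(X, i + 1) for i in range(scores.shape[1])],
        index=scores.columns,
    )
```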

5.3 Validity and reliability assessment

Convergent validity and internal consistency were evaluated against four established benchmarks (Table 3). All outer loadings fall between 0.737 and 0.868, comfortably surpassing the 0.70 guideline that signals strong indicator reliability (Hair et al., 2017). Cronbach's alpha ranges from 0.748 (Perceived AI Frustration) to 0.844 (AI Literacy Deficiency and Continuance Intention), while composite reliability coefficients span 0.839–0.895; both indices exceed the 0.70 threshold for acceptable internal consistency. Average variance extracted values vary from 0.567 to 0.731, confirming that each latent variable captures well over half of its indicators' variance and thus meets the criterion for convergent validity (Hair et al., 2017).


Table 3. Convergent validity and reliability.

Discriminant validity was corroborated via two complementary tests (Table 4). For every construct, the square root of its AVE (bold diagonal) is larger than the associated inter-construct correlations, satisfying the Fornell–Larcker criterion (Fornell and Larcker, 1981). In addition, all heterotrait–monotrait ratios remain below the 0.90 ceiling—the highest observed value is 0.899—thereby providing further evidence that each construct is empirically distinct (Henseler et al., 2015). The cross-loadings matrix in Table 5 further shows that every indicator loads more strongly on its intended construct than on any other construct.
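For readers who wish to reproduce these checks from raw data, the following minimal sketch shows how composite reliability, AVE, and HTMT can be computed from standardized outer loadings and item correlations; the loading values in the usage comment are illustrative, not the study's estimates.

```python
import numpy as np
import pandas as pd

def cr_and_ave(loadings) -> tuple:
    """Composite reliability and AVE from one construct's standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())
    ave = (lam ** 2).mean()
    return cr, ave

def htmt(items: pd.DataFrame, block_a, block_b) -> float:
    """Heterotrait-monotrait ratio for two indicator blocks (Henseler et al., 2015)."""
    corr = items.corr().abs()
    hetero = corr.loc[block_a, block_b].to_numpy().mean()

    def monotrait(block):
        sub = corr.loc[block, block].to_numpy()
        return sub[np.triu_indices_from(sub, k=1)].mean()   # unique within-block pairs

    return hetero / np.sqrt(monotrait(block_a) * monotrait(block_b))

# Illustrative usage (made-up loadings): cr, ave = cr_and_ave([0.79, 0.83, 0.86, 0.76])
# Fornell-Larcker check: np.sqrt(ave) must exceed the construct's correlations with others.
```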


Table 4. Fornell–Larcker criterion and HTMT.


Table 5. Cross-loadings matrix and VIF.

5.4 Hypothesis testing

The analysis begins with the two upstream stressors. A summary of hypothesis testing is shown in Table 6. H5 posits that response failure heightens perceived AI frustration. This path is large and significant in the pooled data (β = 0.520, t = 14.990, f2 = 0.517), confirming that system inaccuracies are a primary emotional trigger. The effect is stronger for Indonesian users (β = 0.579) than for their Taiwanese counterparts (β = 0.318), implying lower tolerance for technical lapses in a less digitally mature setting. Complementing the system view, H6 predicts that AI literacy deficiency raises frustration. The relationship is again significant overall (β = 0.371, t = 11.246, f2 = 0.264) but attenuates in Taiwan (β = 0.234) relative to Indonesia (β = 0.365). Taken together, these results validate the transactional stress perspective: both external demands (erroneous output) and internal resource gaps (limited skills) converge to generate affective strain, with magnitude moderated by contextual infrastructure.


Table 6. Summary of hypothesis testing.

Turning to the first coping pivot, H3 asserts that frustration stimulates resistance to change. The pooled coefficient is strong (β = 0.620, t = 24.156, f2 = 0.626). Country-specific analyses reveal a striking asymmetry: Indonesians translate frustration into inertia more powerfully (β = 0.692, f2 = 0.919) than Taiwanese users (β = 0.337, f2 = 0.144). This pattern suggests that when institutional support and alternative tools are scarce, negative affect reinforces a preference for familiar routines rather than experimentation. The finding aligns with prior change management research showing that perceived environmental constraints intensify defensive coping responses.

Behavioral outcomes are addressed in H1 and H2. Contrary to conventional technology acceptance logic, frustration exerts a positive, medium-sized impact on continuance intention (β = 0.396, t = 10.058, f2 = 0.220), supporting H1. The effect is modest in Indonesia (β = 0.183) but more pronounced in Taiwan (β = 0.327), indicating that users embedded in richer digital ecosystems transform irritation into productive re-engagement. Resistance to change shows an even larger influence on continuance (pooled β = 0.435, t = 10.507, f2 = 0.264), supporting H2, and dominates the decision calculus in Indonesia (β = 0.726) vs. Taiwan (β = 0.269). These findings underscore two distinct persistence routes: a frustration-driven exploration path in Taiwan and a habit-driven inertia path in Indonesia.

Finally, H4 evaluates the mediating role of resistance. The indirect effect of frustration on continuance through resistance is significant overall (β = 0.270, t = 9.251) and markedly stronger in Indonesia (β = 0.502) than in Taiwan (β = 0.091). Variance inflation factors (1.000–1.919) lie comfortably below the 3.3 threshold, eliminating multicollinearity concerns, and all paths retain significance at p < 0.001. The results substantiate the integrated stress and dissonance framework: system failures and skill deficits ignite frustration; frustration channels into either direct recommitment or resistance; and these coping routes—shaped by national context—ultimately sustain, rather than diminish, generative AI engagement.
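As a simple arithmetic cross-check, each reported indirect effect corresponds to the product of the two constituent structural paths (frustration → resistance from H3 and resistance → continuance from H2):

Indirect effect = β(frustration → resistance) × β(resistance → continuance)
Pooled: 0.620 × 0.435 ≈ 0.270
Indonesia: 0.692 × 0.726 ≈ 0.502
Taiwan: 0.337 × 0.269 ≈ 0.091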

5.5 Model robustness testing

The explanatory adequacy of the structural model was first evaluated with R2 statistics. SmartPLS output shows that the exogenous block composed of Response Failure and AI Literacy Deficiency explains 54.8% of the variance in Perceived AI Frustration (R2 = 0.548), while Frustration accounts for 38.5% of the variance in Resistance to Change (R2 = 0.385). Together, Frustration and Resistance predict 50.2% of Continuance Intention (R2 = 0.502). All three coefficients exceed the 0.10 threshold recommended by Falk and Miller (1992), indicating that the model captures a non-trivial share of behavioral variance in both affective and conative outcomes.

Global model quality was gauged with the GoF index proposed by Tenenhaus et al. (2005). Using Equation 1, the square root of the product of the mean AVE across constructs (0.676) and the mean R2 of the endogenous variables (0.479) yields a GoF of 0.568, situating the model in the "large" fit band (>0.36) advocated by Wetzels et al. (2009). In addition, the SRMR value of 0.058 is lower than the 0.08 ceiling for well-fitting models, further affirming overall adequacy (Hu and Bentler, 1999).

GoF = √(mean AVE × mean R2) = √(0.676 × 0.479) = 0.568    (1)

Predictive validity was assessed out of sample with PLSpredict, following the guidelines of Shmueli et al. (2019). All Q2predict statistics are positive and exceed 0.15 (Perceived Frustration = 0.274; Resistance = 0.223; Continuance = 0.267), indicating medium predictive relevance for unseen cases (Table 7).


Table 7. PLS-predict latent variable.

6 Discussion

The study set out to clarify why academics in two culturally distinct settings keep turning to ChatGPT even when it repeatedly frustrates them, and whether the dual lenses of cognitive dissonance and stress coping could account for this paradox. The structural results indicate that all hypothesized paths are significant in the aggregate sample and in each country subgroup, confirming that the proposed mechanism—response failures and literacy gaps → frustration → resistance and/or recommitment → continuance—operates across contexts. Explained variance exceeds 50% for continuance intention, surpassing the "substantial" benchmark for behavioral research in information systems (Hair et al., 2017). Equally important, the model demonstrates strong predictive validity in PLSpredict hold-out analysis, meaning its logic generalizes to unseen data. These diagnostics show that the study's objectives have been achieved: it disentangles human and system stressors, pinpoints frustration as the emotional fulcrum, and reveals resistance to change as a culturally contingent coping channel that reconciles dissonance without derailing use. The findings thus extend earlier continuance models that centered mostly on utilitarian or hedonic benefits (Kim and Baek, 2025) by foregrounding the emotional and motivational work that sustains adoption amid disappointment.

A first salient insight is the magnitude with which response failures ignite frustration—an effect that dwarfs the impact of literacy gaps in both countries but is especially strong in Indonesia. This aligns with controlled experiments showing that visible algorithmic errors trigger sharper drops in trust than latent design flaws (Dietvorst et al., 2015). Yet the literature is divided: some scholars argue small errors are tolerated when systems deliver net gains (Logg et al., 2019), while others report immediate abandonment (Cao et al., 2024). Our data reconcile these positions by showing that visible inaccuracies do spark frustration, but the downstream response—quitting vs. persisting—depends on local cost–benefit calculations. For Indonesian faculty, limited alternatives and institutional licensing encourage tolerance; for Taiwanese students swimming in digital options, each error is more easily reframed as an impetus to experiment. This refinement advances algorithm-aversion debates by demonstrating that tolerance thresholds are not technology-intrinsic but socially constructed around switching barriers and ecosystem maturity.

Equally noteworthy is the finding that AI literacy deficiency significantly amplifies frustration, corroborating technostress research linking skill gaps to negative affect (Ayyagari et al., 2011). However, the smaller coefficient in Taiwan suggests that baseline digital capital attenuates emotional strain, echoing evidence that self-efficacy buffers stress appraisals during IT rollouts (Beaudry and Pinsonneault, 2005). The implication is double-edged. On the pro side, targeted literacy interventions could rapidly reduce frustration in emerging markets; on the con side, relying solely on skill building may not suffice where systemic hallucinations remain frequent. Hence, human-centered and system-improvement strategies must proceed in tandem. This conclusion contrasts with studies that privilege either user training (Long and Magerko, 2020) or algorithmic optimization (Benbasat and Wang, 2005) as silver bullets, arguing instead for integrated policies that tackle both ends of the stress appraisal.

Turning to coping responses, the robust positive path from frustration to resistance to change indicates that negative emotions can harden into inertia rather than disengagement, particularly in Indonesia. This pattern resonates with organizational studies showing that uncertainty triggers defensive routines aimed at preserving familiar workflows (Lapointe and Rivard, 2005). Yet it challenges adoption frameworks that routinely treat resistance as a barrier to be minimized (Jasperson et al., 2005). In the generative AI realm, resistance appears to serve an adaptive role: by lowering willingness to experiment with alternatives, it allows users to preserve perceived value while containing cognitive dissonance. Such a protective mechanism mirrors findings in the service failure literature, where customers discount minor lapses to avoid search costs (Choi et al., 2018). The downside is that entrenched resistance may stifle critical reflection and limit the uptake of genuinely superior updates, suggesting a need for institutional nudges that encourage periodic reassessment without forcing disruptive switches.

The most counter-intuitive discovery concerns the direct positive link between frustration and continuance intention, strongest in Taiwan. Earlier work on social-media fatigue documents similar “stickiness,” explaining it through fear of missing out and ingrained habit (Ravindran et al., 2014). Our findings extend this insight to knowledge-work AI: when users perceive high instrumental benefits, frustration becomes a challenge trigger rather than a quitting cue. In Taiwan's digitally saturated environment, participants appear to convert irritation into active problem solving, echoing mastery-oriented stress interpretations (Crum et al., 2017). This suggests a pro: frustration can fuel skill development and system feedback loops. The con is cognitive overload; repeated cycles of effortful verification may exhaust attention and erode deep learning, paralleling concerns about surface-level engagement raised in experimental studies (Lundin et al., 2023).

Resistance's mediation of the frustration–continuance tie further clarifies cultural divergence. In Indonesia, the indirect path is dominant, indicating that users cope affectively by entrenching routines. This supports theorizing that high switching costs and collective endorsement strengthen status quo bias (Samuelson and Zeckhauser, 1988). Conversely, Taiwanese users rely more on the direct route, illustrating that in agile ecosystems habits matter less than perceived challenge and mastery opportunities. Such divergence underscores that coping trajectory is context sensitive, cautioning against universal prescriptions like “reduce frustration to spur adoption.” Instead, policymakers should diagnose local infrastructure and social norms before selecting levers—be it literacy training, error transparency dashboards, or incentivized experimentation.

7 Implications

7.1 Implications for theory

The first theoretical contribution of this study lies in demonstrating how CDT and the TMSC can be fruitfully combined to explain post-adoption paradoxes in generative AI use. Earlier continuance work has foregrounded utilitarian or hedonic benefits (Kim and Baek, 2025; Wolf and Maier, 2024), whereas dissonance studies have rarely traced stress appraisals over time. By showing that system-origin demands (response failures) and human-origin deficits (AI literacy gaps) elevate frustration, which then triggers simultaneous dissonance-reduction and coping appraisals, our model clarifies the emotional mechanics sustaining engagement even in the face of persistent disappointment. This dual-path perspective extends CDT research that treated rationalization largely as a cognitive process divorced from stress (Stone and Taylor, 2021), and it enriches TMSC studies that seldom incorporate the attitude-repair logic embedded in dissonance reduction. Empirically, the finding that frustration can propel both resistance and renewed commitment situates negative affect as a pivot rather than a pure deterrent, urging theorists to model emotions and cognitions as intertwined rather than sequential. Consequently, the study advances a more integrative framework for understanding how users reconcile instrumental benefits with psychological discomfort in AI-mediated work.

A second theoretical advance concerns the reconceptualization of resistance to change. Prior information systems research typically frames resistance as a barrier that firms must minimize during rollouts (Lapointe and Rivard, 2005) or as an antecedent of discontinuance. Our data reveal that, under conditions of high perceived utility and scarce substitutes, resistance acts as an adaptive buffer: it mediates the frustration–continuance link, allowing users to protect workflow stability while downgrading expectations about perfect accuracy. This finding corroborates service recovery evidence that customers discount minor lapses to avoid switching costs (Choi et al., 2021) and extends Marikyan et al.'s (2023) smart-home study by locating the mechanism explicitly in a generative AI context. Theoretically, positioning resistance as both outcome and conduit reframes status quo bias (Samuelson and Zeckhauser, 1988) as a dynamic coping resource rather than a static trait. It also explains why negative emotions do not necessarily erode continuance intention, thereby challenging linear models that posit a direct, negative frustration–usage relationship (Mun and Hwang, 2025). Future continuance theories should therefore treat resistance not only as an obstacle but also as a diagnostic marker signaling unresolved cognitive conflict that nevertheless supports short-term persistence.

Finally, the multi-group results contribute to cross-cultural technology theory by showing that identical stressors generate divergent coping trajectories depending on digital-ecosystem maturity. Taiwanese users, operating in a technologically saturated environment, channel frustration directly into exploratory recommitment, echoing mastery-oriented stress interpretations (Crum et al., 2017). In contrast, Indonesian users, constrained by higher switching costs and institutional endorsements, route frustration through resistance, mirroring defensive routines observed in contexts of limited alternative support (Tarafdar et al., 2019). This asymmetry advances the algorithm-aversion literature, which has debated whether visible AI errors universally diminish trust (Dietvorst et al., 2015) or are tolerated when benefits remain high (Logg et al., 2019), by demonstrating that tolerance thresholds are socially constructed around digital capital and organizational backing. Moreover, the findings deepen global continuance models by integrating cultural moderators into emotional coping pathways, an aspect largely overlooked in prior studies of ChatGPT adoption (Zhong et al., 2024).

7.2 Implications for practice

Academic users in both countries keep ChatGPT in their workflow because the time it saves outweighs the aggravation of occasional hallucinations, yet the study shows that every unverifiable claim drives frustration and, if left unchecked, hardens into resistance. Universities can convert this risk into a learning asset by embedding an "accuracy spine" that captures errors, circulates them for collective diagnosis, and feeds them back into teaching. A small plug-in placed beside the chat window lets any user flag dubious output with one click; graduate assistants curate these flags each afternoon, classify the underlying failure mode, and post the cases to a searchable dashboard sorted by discipline, as sketched below. Lecturers draw weekly from this dashboard to demonstrate verification moves in class, linking critical-reading skills to immediate coursework rather than adding extra workload. Similar human-in-the-loop feedback loops have improved data-quality perceptions in earlier information-system deployments (Fu et al., 2024; Silalahi, 2025) and, in the context of this study, would keep dissonance within tolerable limits, especially for Indonesian users who reacted more strongly to repeated errors. By normalizing error reporting and making remediation visible, the institution signals that frustration is a shared problem, thereby pre-empting the inertia that the research identifies as a key coping response.
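As a rough illustration of what such an accuracy spine might store, the Python sketch below shows a minimal flag record and the discipline-by-failure-mode summary a dashboard could draw on. All class, field, and function names are hypothetical; they do not describe an existing plug-in or institutional system.

```python
from __future__ import annotations

from collections import Counter
from dataclasses import dataclass

# Hypothetical record for one flagged chatbot output; names and fields are
# illustrative only, not part of any existing plug-in.
@dataclass
class FlaggedOutput:
    discipline: str         # e.g. "law", "nursing"
    failure_mode: str       # e.g. "fabricated citation", "outdated protocol"
    prompt_excerpt: str     # enough context for a curator to reproduce the problem
    verified: bool = False  # set to True once a curator confirms the error

def dashboard_summary(flags: list[FlaggedOutput]) -> dict[str, Counter]:
    """Group confirmed errors by discipline so lecturers can pull weekly teaching cases."""
    summary: dict[str, Counter] = {}
    for flag in flags:
        if flag.verified:
            summary.setdefault(flag.discipline, Counter())[flag.failure_mode] += 1
    return summary

# Example: two confirmed fabricated citations in law, one outdated protocol in nursing.
flags = [
    FlaggedOutput("law", "fabricated citation", "cite the 2019 ruling on ...", True),
    FlaggedOutput("law", "fabricated citation", "summarize precedent for ...", True),
    FlaggedOutput("nursing", "outdated protocol", "current sepsis guideline ...", True),
]
print(dashboard_summary(flags))
```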

Frustration spikes when real-world performance falls short of the mental picture users hold of artificial intelligence. Orientation sessions must therefore move beyond glossy demos and incorporate candid "myth vs. fact" walkthroughs. Facilitators present rapid-fire pairs of brilliant and flawed outputs, then guide participants in tracing each failure back to its root: a vague prompt, an outdated training cutoff, a domain bias. The group records a corrective tactic next to every flaw, turning expectation management into a participatory exercise rather than a top-down lecture. This approach borrows from service recovery pedagogy that emphasizes shared sense-making (Marikyan et al., 2022) and directly addresses the expectation gap that inflated frustration in this study. Each cautionary example is matched with a showcase of genuine benefit, such as a near-instant literature map or a multilingual summary, so enthusiasm is tempered but not extinguished. Participants leave with a written pledge describing how they will authenticate high-stakes answers; instructors revisit this pledge midway through the term, reinforcing a culture of accountable use. Balanced onboarding such as this has been shown to cushion negative affect by tying perceived usefulness to realistic boundaries, aligning with the study's finding that irritation does not automatically lead to abandonment when instrumental value remains salient.

One-off workshops rarely close competence deficits, and the present findings confirm that chronic literacy gaps are a reliable generator of frustration. A more sustainable alternative is to weave micro-habits into regular coursework. Every week, students complete a five-minute "prompt makeover": they rewrite an ineffective query, compare outputs, and upload screenshots to a shared gallery. Over the semester the gallery reveals prompt patterns across subjects, creating a peer-driven reference bank while normalizing trial and error. A second habit, the "double-source rule," requires students to paste two independent confirmations alongside any chatbot-derived fact they use in an assignment. This rule instills the watchdog mindset that Long and Magerko (2020) identify as core to AI literacy and directly combats the helplessness that fed inertia among Indonesian participants. Short reflective check-ins ask, "What frustrated me this week and how did I fix it?", turning emotion-focused coping into a conscious routine, as advocated by Lazarus (1984). Because each exercise is tiny, lecturers can embed them without displacing disciplinary content, yet the cumulative effect is a progressive reduction in error-triggered annoyance and a parallel boost in self-efficacy, two levers the study links to healthier, more exploratory persistence.

Developers often pour effort into squeezing marginal gains in accuracy, but the evidence here suggests that affective transparency may calm users more effectively than another decimal point of precision. First, a traffic-light confidence indicator (green for cross-validated, yellow for unverified, gray for speculative) helps users allocate checking effort and has restored trust in earlier algorithmic-advice studies (Dietvorst et al., 2015); a brief illustrative mapping appears below. Second, a side-by-side answer comparison with an alternative open-source model encourages critical reading when the two outputs diverge and reinforces confidence when they converge. Third, the interface can watch for repeated edits to a passage and surface a brief, discipline-specific prompt tip, delivering just-in-time training to users with evident literacy gaps. Finally, a micro-incentive scheme grants extra peak-time priority to users who submit verified error reports, harnessing frustration for model improvement and echoing crowd-sourced quality programmes that have sharpened search relevance in past information retrieval research (Benbasat and Wang, 2005). By aligning design choices with the coping mechanisms surfaced in this study (direct recommitment in digitally mature settings and inertia buffering in resource-scarce ones), vendors move from a one-size-fits-all interface to a partnership that modulates emotion as well as information.
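For concreteness, the minimal sketch below illustrates one way the traffic-light suggestion could be operationalized. The function name, inputs, and thresholds are assumptions made for illustration; they are not features of any existing interface.

```python
def confidence_band(cross_validated: bool, source_coverage: float) -> str:
    """Map a response's verification state to a traffic-light label.

    `source_coverage` is assumed to be the share of claims the system could ground
    in retrievable sources (0-1); both inputs and thresholds are illustrative.
    """
    if cross_validated and source_coverage >= 0.8:
        return "green"   # cross-validated: low checking effort needed
    if source_coverage >= 0.4:
        return "yellow"  # unverified: spot-check before relying on it
    return "gray"        # speculative: treat as a draft, not a fact

print(confidence_band(cross_validated=False, source_coverage=0.3))  # -> gray
```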

The contrast between Indonesia's habit-anchored persistence and Taiwan's exploration-oriented coping reiterates that digital context shapes emotional trajectories. Ministries in resource-constrained environments should mandate an "AI accuracy ledger" for each public university: a live repository cataloging frequent error categories, average verification time, and corrective steps. Publishing this ledger exerts reputational pressure that discourages complacent tolerance of low-grade performance and guides targeted grants toward language-specific fine-tuning or staff development. In digitally saturated ecosystems the policy lever looks different: regulators can require universities to maintain a rotation zone where at least two competing large language models are trialed each semester under strict privacy safeguards. This prevents monopoly lock-in and keeps the exploration channel alive, aligning with the direct persistence route identified among Taiwanese users. These differentiated nudges improve not only technical performance but also the emotional climate that surrounds AI work, reducing the probability that frustration congeals into passive cynicism or uncritical dependence.

Library and research-office personnel are now frontline advisers on AI use, yet the study warns that if these support units do not intervene strategically, staff and students will simply normalize a cycle of copying, checking and quietly fuming. A practical remedy is to fold prompt engineering and verification standards into existing research-integrity checkpoints. When a graduate student submits a draft, the writing center could request the underlying prompts and chat extracts as supplementary material, mirroring how raw data or statistical code are already archived. Consultants then focus the tutorial on identifying where hallucinations crept in and how alternative prompts or model comparisons might have pre-empted them. This not only elevates document quality but also externalizes the coping strategies that more skilled users apply instinctively, accelerating peer learning. Journals linked to the institution can reinforce the loop by asking authors to sign a disclosure stating whether AI-generated text was used and how it was validated—echoing new policies in several international publishers. By embedding these procedural guard-rails into the scholarly communication lifecycle, universities transform individual frustration into a collective protocol for responsible innovation. In line with the study's findings, such visible standards prevent inertia from mutating into silent reputational risk, while still preserving the productivity edge that makes ChatGPT attractive in the first place.

8 Conclusion, limitations and future research avenues

This study unpacks the paradox that university users continue turning to ChatGPT even when it repeatedly disappoints them. By merging cognitive dissonance theory with the transactional model of stress and coping, we show that two seemingly mundane antecedents (unreliable responses from the system and gaps in users' AI literacy) ignite frustration. Rather than automatically driving abandonment, that frustration forks into two culturally distinct routes. In the more resource-constrained Indonesian context it hardens into resistance to change, which, in turn, anchors the tool firmly in existing routines. In digitally saturated Taiwan it sparks renewed experimentation that keeps the chatbot in play while users hunt for better prompts or complementary applications. Across both routes, generative AI persists not because it is flawless but because users learn to fold irritation into their workflow, balancing emotional discomfort against tangible productivity gains. The model therefore reframes frustration as a pivot that sustains, rather than erodes, engagement and highlights the situational nature of coping.

The findings must be interpreted against several boundaries. First, the study relies on self-reported perceptions captured at a single point in time; it cannot trace how emotions and coping evolve as users accumulate experience or as the platform's capabilities shift. Second, all participants were drawn from higher-education settings where scholarly norms and institutional licenses shape both the costs of switching and the desire for accuracy; results may look different in corporate, public-sector, or informal learning environments. Third, the focal variables were confined to emotional and cognitive reactions; behavioral data such as log files, prompt histories, or documented verification practices were not captured, leaving open the gap between stated intention and actual use. Finally, cultural comparison was limited to two economies in East and Southeast Asia, each with its own language and policy landscape, so generalization to other regions must proceed with caution.

Scholars can extend this work in at least three directions. Longitudinal or diary designs would clarify whether users eventually abandon maladaptive coping, refine their prompts into problem-focused routines, or shift to rival platforms as opportunity costs change. Experimental studies could manipulate transparency features (confidence indicators, citation links, or side-by-side model comparisons) to test whether interface tweaks dampen frustration before it calcifies into inertia. Broader comparative projects should map coping trajectories across additional cultures, professional roles, and generative AI tools, incorporating objective usage metrics and performance audits to triangulate self-report data. Finally, intervention research is needed: targeted literacy programmes, peer feedback dashboards, and institutional error-tracking systems could be deployed and evaluated to discover which combinations most effectively convert "irritated but engaged" users into confident, critically minded partners of artificial intelligence.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical approval was not required for the studies involving humans. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

AS: Conceptualization, Methodology, Resources, Validation, Writing – original draft, Writing – review & editing. NA: Conceptualization, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing. NK: Investigation, Methodology, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. ADKS: Conceptualization, Formal analysis, Investigation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Archibald, M. M., and Clark, A. M. (2023). ChatGPT: what is it and how can nursing and health science education use it? J. Adv. Nurs. 79, 3648–3651. doi: 10.1111/jan.15643


Ayyagari, R., Grover, V., and Purvis, R. (2011). Technostress: technological antecedents and implications. MIS Q. 35, 831–858. doi: 10.2307/41409963


Baumgartner, H., Weijters, B., and Pieters, R. (2021). The biasing effect of common method variance: some clarifications. J. Acad. Market. Sci. 49, 221–235. doi: 10.1007/s11747-020-00766-8


Beaudry, A., and Pinsonneault, A. (2005). Understanding user responses to information technology: a coping model of user adaptation. MIS Q. 29, 493–524. doi: 10.2307/25148693


Beaudry, A., and Pinsonneault, A. (2010). The other side of acceptance: studying the direct and indirect effects of emotions on information technology use. MIS Q. 34, 689–710. doi: 10.2307/25750701


Benbasat, I., and Wang, W. (2005). Trust in and adoption of online recommendation agents. J. Assoc. Inf. Syst. 6:4. doi: 10.17705/1jais.00065


Benbya, H., Pachidi, S., and Jarvenpaa, S. (2021). Special issue editorial: artificial intelligence in organizations: implications for information systems research. J. Assoc. Inf. Syst. 22:10. doi: 10.17705/1jais.00662


Bhattacherjee, A. (2001). Understanding information systems continuance: an expectation-confirmation model. MIS Q. 25, 351–370. doi: 10.2307/3250921


Bhattacherjee, A., and Barfar, A. (2011). Information technology continuance research: current state and future directions. Asia Pacific J. Inf. Syst. 21, 1–18.


Bhattacherjee, A., and Hikmet, N. (2007). Physicians' resistance toward healthcare information technology: a theoretical model and empirical test. Eur. J. Inf. Syst. 16, 725–737. doi: 10.1057/palgrave.ejis.3000717


Califf, C. B., Sarker, S., and Sarker, S. (2020). The bright and dark sides of technostress: a mixed-methods study involving healthcare IT. MIS Q. 44, 809–856. doi: 10.25300/MISQ/2020/14818


Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: using SmartPLS to advance an information technology acceptance framework. Technol. Forecast. Soc. Change 201:123247. doi: 10.1016/j.techfore.2024.123247


Cao, Z., Li, M., and Pavlou, P. A. (2024). AI in business research. Decis. Sci. 55, 518–532. doi: 10.1111/deci.12655


Carter, M., Petter, S., Grover, V., and Thatcher, J. B. (2020). IT identity: a measure and empirical investigation of its utility to IS research. J. Assoc. Inf. Syst. 21:2. doi: 10.17705/1jais.00638


Carver, C. S., and Connor-Smith, J. (2010). Personality and coping. Annu. Rev. Psychol. 61, 679–704. doi: 10.1146/annurev.psych.093008.100352


Cheikh-Ammar, M. (2020). The bittersweet escape to information technology: an investigation of the stress paradox of social network sites. Inf. Manag. 57:103368. doi: 10.1016/j.im.2020.103368


Chen, A., Xiang, M., Zhou, J., Jia, J., Shang, J., Li, X., et al. (2025). Unpacking help-seeking process through multimodal learning analytics: a comparative study of ChatGPT vs Human expert. Comput. Educ. 226:105198. doi: 10.1016/j.compedu.2024.105198


Chiang, M., and Chen, P. (2022). Education for sustainable development in the business programme to develop international Chinese college students' sustainability in Thailand. J. Clean. Prod. 374:134045. doi: 10.1016/j.jclepro.2022.134045


Choi, C., Mattila, A. S., and Upneja, A. (2018). The effect of assortment pricing on choice and satisfaction: the moderating role of consumer characteristics. Cornell Hosp. Q. 59, 6–14. doi: 10.1177/1938965517730315


Choi, S., Mattila, A. S., and Bolton, L. E. (2021). To err is human (-oid): how do consumers react to robot service failure and recovery?. J. Serv. Res. 24, 354–371. doi: 10.1177/1094670520978798


Cooper, R. G. (2019). The drivers of success in new-product development. Ind. Market. Manag. 76, 36–47. doi: 10.1016/j.indmarman.2018.07.005


Crum, A. J., Akinola, M., Martin, A., and Fath, S. (2017). The role of stress mindset in shaping cognitive, emotional, and physiological responses to challenging and threatening stress. Anxiety Stress Coping 30, 379–395. doi: 10.1080/10615806.2016.1275585


Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340. doi: 10.2307/249008


Dietvorst, B. J., Simmons, J. P., and Massey, C. (2015). Algorithm aversion: people erroneously avoid algorithms after seeing them err. J. Exp. Psychol. 144:114. doi: 10.1037/xge0000033


Dietvorst, B. J., Simmons, J. P., and Massey, C. (2018). Overcoming algorithm aversion: people will use imperfect algorithms if they can (even slightly) modify them. Manage. Sci. 64, 1155–1170. doi: 10.1287/mnsc.2016.2643


Dinev, T., Goo, J., Hu, Q., and Nam, K. (2009). User behaviour towards protective information technologies: the role of national cultural differences. Inf. Syst. J. 19, 391–412. doi: 10.1111/j.1365-2575.2007.00289.x


Falk, R. F., and Miller, N. B. (1992). A Primer for Soft Modeling. Akron, OH: University of Akron Press.


Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.


Folkman, S., and Moskowitz, J. T. (2004). Coping: pitfalls and promise. Annu. Rev. Psychol. 55, 745–774. doi: 10.1146/annurev.psych.55.090902.141456


Ford, J. D., and Ford, L. W. (2009). Resistance to change: a reexamination and extension. Res. Organ. Change Dev. 17, 211–239. doi: 10.1108/S0897-3016(2009)0000017008


Fornell, C., and Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. J. Market. Res. 18, 39–50. doi: 10.1177/002224378101800104


Fu, C. J., Silalahi, A. D. K., Shih, I. T., Phuong, D. T. T., Eunike, I. J., and Jargalsaikhan, S. (2024). Assessing ChatGPT's information quality through the lens of user information satisfaction and information quality theory in higher education: a theoretical framework. Hum. Behav. Emerg. Technol. 2024:8114315. doi: 10.1155/2024/8114315


Gao, Y., Wang, Q., and Wang, X. (2024). Exploring EFL university teachers' beliefs in integrating ChatGPT and other large language models in language education: A study in China. Asia Pac. J. Educ. 44, 29–44. doi: 10.1080/02188791.2024.2305173


Gawronski, B., and Brannon, S. M. (2020). Power and moral dilemma judgments: distinct effects of memory recall versus social roles. J. Exp. Soc. Psychol. 86:103908. doi: 10.1016/j.jesp.2019.103908


Ghio, A. (2024). Democratizing academic research with Artificial Intelligence: the misleading case of language. Crit. Perspect. Account. 98:102687. doi: 10.1016/j.cpa.2023.102687


Gross, J. J. (2015). Emotion regulation: current status and future prospects. Psychol. Inq. 26, 1–26. doi: 10.1080/1047840X.2014.940781


Hair, J., Hollingsworth, C. L., Randolph, A. B., and Chong, A. Y. L. (2017). An updated and expanded assessment of PLS-SEM in information systems research. Ind. Manag. Data Syst. 117, 442–458. doi: 10.1108/IMDS-04-2016-0130


Hair, J. F., Astrachan, C. B., Moisescu, O. I., Radomir, L., Sarstedt, M., Vaithilingam, S., et al. (2021). Executing and interpreting applications of PLS-SEM: Updates for family business researchers. J. Fam. Bus. Strateg. 12:100392. doi: 10.1016/j.jfbs.2020.100392


Harmon-Jones, E., and Harmon-Jones, C. (2007). Cognitive dissonance theory after 50 years of development. Z. Sozialpsychol. 38, 7–16. doi: 10.1024/0044-3514.38.1.7


Harmon-Jones, E., Harmon-Jones, C., and Levy, N. (2015). An action-based model of cognitive-dissonance processes. Curr. Dir. Psychol. Sci. 24, 184–189. doi: 10.1177/0963721414566449


Henseler, J., Ringle, C. M., and Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Market. Sci. 43, 115–135. doi: 10.1007/s11747-014-0403-8


Hsu, W. L., and Silalahi, A. D. K. (2024). Exploring the paradoxical use of ChatGPT in education: analyzing benefits, risks, and coping strategies through integrated UTAUT and PMT theories using a hybrid approach of SEM and fsQCA. Comput. Educ. Artif. Intell. 7:100329. doi: 10.1016/j.caeai.2024.100329


Hu, L. T., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 6, 1–55. doi: 10.1080/10705519909540118


Issa, M., Faraj, M., and AbiGhannam, N. (2024). Exploring ChatGPT's ability to classify the structure of literature reviews in engineering research articles. IEEE Trans. Learn. Technol. 17, 1819–1828. doi: 10.1109/TLT.2024.3409514


Izuma, K., and Murayama, K. (2019). “Neural basis of cognitive dissonance,” in Cognitive Dissonance: Reexamining a Pivotal Theory in Psychology, ed. E. Harmon-Jones, 2nd Edn. (Washington, DC: American Psychological Association), 227–245.


Jasperson, J., Carter, P. E., and Zmud, R. W. (2005). A comprehensive conceptualization of post-adoptive behaviors associated with information technology enabled work systems. MIS Q. 29, 525–557. doi: 10.2307/25148694


Jones-Jang, S. M., and Park, Y. J. (2023). How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability. J. Comput. Mediat. Commun. 28:zmac029. doi: 10.1093/jcmc/zmac029


Kim, H. W., and Kankanhalli, A. (2009). Investigating user resistance to information systems implementation: a status quo bias perspective. MIS Q. 33, 567–582. doi: 10.2307/20650309


Kim, J. S., and Baek, T. H. (2025). Motivational determinants of continuance usage intention for generative AI: an investment model approach for ChatGPT users in the United States. Behav. Inf. Technol. 44, 3080–3096. doi: 10.1080/0144929X.2024.2429647


Kiryakova, G., and Angelova, N. (2023). ChatGPT—A challenging tool for the university professors in their teaching practice. Educ. Sci. 13:1056. doi: 10.3390/educsci13101056


Klein, S. B., Cosmides, L., Tooby, J., and Chance, S. (2002). Decisions and the evolution of memory: multiple systems, multiple functions. Psychol. Rev. 109:306. doi: 10.1037/0033-295X.109.2.306


Kock, N. (2015). Common method bias in PLS-SEM: a full collinearity assessment approach. Int. J. e-Collab. 11, 1–10. doi: 10.4018/ijec.2015100101


Komiak, S. Y., and Benbasat, I. (2006). The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Q. 30, 941–960. doi: 10.2307/25148760


Lapointe, L., and Rivard, S. (2005). A multilevel model of resistance to information technology implementation. MIS Q. 29, 461–491. doi: 10.2307/25148692


Lazar, J., Jones, A., and Shneiderman, B. (2006). Workplace user frustration with computers: an exploratory investigation of the causes and severity. Behav. Inf. Technol. 25, 239–251. doi: 10.1080/01449290500196963


Lazarus, R. S. (1984). Stress, Appraisal, and Coping, Vol. 464. New York: Springer.


Li, L., Ma, Z., Fan, L., Lee, S., Yu, H., and Hemphill, L. (2024). ChatGPT in education: A discourse analysis of worries and concerns on social media. Educ. Inf. Technol. 29, 10729–10762. doi: 10.48550/arXiv.2305.02201


Liang, Y., Zou, D., Xie, H., and Wang, F. L. (2023). Exploring the potential of using ChatGPT in physics education. Smart Learn. Environ. 10:52. doi: 10.1186/s40561-023-00273-7


Logg, J. M., Minson, J. A., and Moore, D. A. (2019). Algorithm appreciation: people prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 151, 90–103. doi: 10.1016/j.obhdp.2018.12.005


Long, D., and Magerko, B. (2020). “What is AI literacy? Competencies and design considerations,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (New York: Association for Computing Machinery), 1–16.


Lundin, J., Modén, M. U., Lindell, T. L., and Fischer, G. (2023). A remedy to the unfair use of AI in educational settings. Interact. Design Archit. 59, 62–78. doi: 10.55612/s-5002-059-002


Ma, J., Wang, P., Li, B., Wang, T., Pang, X. S., and Wang, D. (2025). Exploring user adoption of ChatGPT: a technology acceptance model perspective. Int. J. Hum. Comput. Interact. 41, 1431–1445. doi: 10.1080/10447318.2024.2314358


Maier, C., Laumer, S., and Weitzel, T. (2022). A dark side of telework: a social comparison-based study from the perspective of office workers. Bus. Inf. Syst. Eng. 64, 793–811. doi: 10.1007/s12599-022-00758-8


Marikyan, D., Papagiannidis, S., Rana, O. F., and Ranjan, R. (2022). Blockchain adoption: a study of cognitive factors underpinning decision making. Comput. Human Behav. 131:107207. doi: 10.1016/j.chb.2022.107207


Marikyan, D., Papagiannidis, S., and Stewart, G. (2023). Technology acceptance research: meta-analysis. J. Inf. Sci. 01655515231191177. doi: 10.1177/01655515231191177


Mun, I. B., and Hwang, K. H. (2025). Understanding ChatGPT continuous usage intention: the role of information quality, information usefulness, and source trust. Inf. Dev. 41, 675–691. doi: 10.1177/02666669241307595


Nguyen, L. Q., Le, H. V., and Nguyen, P. T. (2024). A mixed-methods study on the use of chatgpt in the pre-writing stage: EFL learners' utilization patterns, affective engagement, and writing performance. Educ. Inf. Technol. 30, 10511–10534. doi: 10.1007/s10639-024-13231-8


Oreg, S. (2006). Personality, context, and resistance to organizational change. Eur. J. Work Organ. Psychol. 15, 73–101. doi: 10.1080/13594320500451247


Oreg, S., Bayazit, M., Vakola, M., Arciniega, L., Armenakis, A., Barkauskiene, R., et al. (2008). Dispositional resistance to change: measurement equivalence and the link to personal values across 17 nations. J. Appl. Psychol. 93:935. doi: 10.1037/0021-9010.93.4.935


Parker, L., Carter, C., Karakas, A., Loper, A. J., and Sokkar, A. (2024). Graduate instructors navigating the AI frontier: the role of ChatGPT in higher education. Comput. Educ. Open 6:100166. doi: 10.1016/j.caeo.2024.100166


Patton, M. Q. (2015). Qualitative Research and Evaluation Methods, 4th Edn. Thousand Oaks, CA: Sage.


Podsakoff, P. M., MacKenzie, S. B., and Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annu. Rev. Psychol. 63, 539–569. doi: 10.1146/annurev-psych-120710-100452


Pullins, E., Tarafdar, M., and Pham, P. (2020). The dark side of sales technologies: how technostress affects sales professionals. J. Organ. Effect. People Perform. 7, 297–320. doi: 10.1108/JOEPP-04-2020-0045


Pütz, O., and Esposito, E. (2024). Performance without understanding: how ChatGPT relies on humans to repair conversational trouble. Discours. Commun. 18, 859–868. doi: 10.1177/17504813241271492


Ragu-Nathan, T. S., Tarafdar, M., Ragu-Nathan, B. S., and Tu, Q. (2008). The consequences of technostress for end users in organizations: conceptual development and empirical validation. Inf. Syst. Res. 19, 417–433. doi: 10.1287/isre.1070.0165


Ravindran, T., Yeow Kuan, A. C., and Hoe Lian, D. G. (2014). Antecedents and effects of social network fatigue. J. Assoc. Inf. Sci. Technol. 65, 2306–2320. doi: 10.1002/asi.23122


Ryoo, Y., Halfacre, V., Kim, E., and Yoon, H. J. (2025). AI chatbot interventions in combatting marijuana-impaired driving: the role of gender, linguistic style, and hypocrisy induction. Int. J. Advert. 45, 1311–1340. doi: 10.1080/02650487.2025.2458996


Samuelson, W., and Zeckhauser, R. (1988). Status quo bias in decision making. J. Risk Uncertain. 1, 7–59. doi: 10.1007/BF00055564


Shin, Y., Kim, S. Y., and Byun, E. Y. (2024). “A study on prompt types for harmlessness assessment of large-scale language models,” in International Conference on Human-Computer Interaction, eds. C. Stephanidis, M. Antona, S. Ntoa, and G. Salvendy (Cham: Springer Nature Switzerland), 228–233.


Shmueli, G., Sarstedt, M., Hair, J. F., Cheah, J. H., Ting, H., Vaithilingam, S., et al. (2019). Predictive model assessment in PLS-SEM: guidelines for using PLSpredict. Eur. J. Mark. 53, 2322–2347. doi: 10.1108/EJM-02-2019-0189


Shneiderman, B. (2020). Human-centered artificial intelligence: reliable, safe and trustworthy. Int. J. Hum. Comput. Interact. 36, 495–504. doi: 10.1080/10447318.2020.1741118


Silalahi, A., and Demirci, S. (2025). The paradox of frustration and anger in driving users' continuance intention toward generative AI. Available at SSRN 5231025.


Silalahi, A. D. K. (2025). Can generative artificial intelligence drive sustainable behavior? A consumer-adoption model for AI-driven sustainability recommendations. Technol. Soc. 83:102995. doi: 10.1016/j.techsoc.2025.102995


Skinner, E. A., Edge, K., Altman, J., and Sherwood, H. (2003). Searching for the structure of coping: a review and critique of category systems for classifying ways of coping. Psychol. Bull. 129:216. doi: 10.1037/0033-2909.129.2.216


Smith, C. A., and Lazarus, R. S. (1993). Appraisal components, core relational themes, and the emotions. Cogn. Emot. 7, 233–269. doi: 10.1080/02699939308409189


Song, C., and Song, Y. (2023). Enhancing academic writing skills and motivation: assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Front. Psychol. 14:1260843. doi: 10.3389/fpsyg.2023.1260843


Stein, D. J., Costa, D. L., Lochner, C., Miguel, E. C., Reddy, Y. J., Shavitt, R. G., et al. (2019). Obsessive–compulsive disorder. Nat. Rev. Dis. Primers 5:52. doi: 10.1038/s41572-019-0102-3


Stone, J., and Taylor, J. J. (2021). “Dissonance and attitude change,” in Oxford Research Encyclopedia of Psychology. Available online at: https://oxfordre.com/psychology/view/10.1093/acrefore/9780190236557.001.0001/acrefore-9780190236557-e-296 (Retrieved November 3, 2025).


Sun, J. (2014). How risky are services? An empirical investigation on the antecedents and consequences of perceived risk for hotel service. Int. J. Hosp. Manag. 37, 171–179. doi: 10.1016/j.ijhm.2013.11.008


Sundar, S. S., and Liao, M. (2023). Calling BS on ChatGPT: reflections on AI as a communication source. J. Commun. Monogr. 25, 165–180. doi: 10.1177/15226379231167135


Tarafdar, M., Cooper, C. L., and Stich, J. F. (2019). The technostress trifecta-techno eustress, techno distress and design: theoretical directions and an agenda for research. Inf. Syst. J. 29, 6–42. doi: 10.1111/isj.12169


Tarafdar, M., Maier, C., Laumer, S., and Weitzel, T. (2020). Explaining the link between technostress and technology addiction for social networking sites: a study of distraction as a coping behavior. Inf. Syst. J. 30, 96–124. doi: 10.1111/isj.12253


Tarafdar, M., Stich, J. F., Maier, C., and Laumer, S. (2024). Techno-eustress creators: conceptualization and empirical validation. Inf. Syst. J. 34, 2097–2131. doi: 10.1111/isj.12515


Tenenhaus, M., Vinzi, V. E., Chatelin, Y. M., and Lauro, C. (2005). PLS path modeling. Comput. Stat. Data Anal. 48, 159–205. doi: 10.1016/j.csda.2004.03.005


Tsai, C. Y., Lin, Y. T., and Brown, I. K. (2024). Impacts of ChatGPT-assisted writing for EFL English majors: feasibility and challenges. Educ. Inf. Technol. 29, 22427–22445. doi: 10.1007/s10639-024-12722-y


Turel, O., Serenko, A., and Bontis, N. (2010). User acceptance of hedonic digital artifacts: a theory of consumption values perspective. Inf. Manag. 47, 53–59. doi: 10.1016/j.im.2009.10.002


Wang, G., Guo, Y., Zhang, W., Xie, S., and Chen, Q. (2023). What type of algorithm is perceived as fairer and more acceptable? A comparative analysis of rule-driven versus data-driven algorithmic decision-making in public affairs. Govern. Inf. Q. 40:101803. doi: 10.1016/j.giq.2023.101803


Wang, Q., Liu, C., and Lan, S. (2023). Digital literacy and financial market participation of middle-aged and elderly adults in China. Econ. Polit. Stud. 11, 441–468. doi: 10.1080/20954816.2022.2115191


Weiler, S., Matt, C., and Hess, T. (2022). Immunizing with information–Inoculation messages against conversational agents' response failures. Electron. Markets 32, 239–258. doi: 10.1007/s12525-021-00509-9


Wetzels, M., Odekerken-Schröder, G., and Van Oppen, C. (2009). Using PLS path modeling for assessing hierarchical construct models: guidelines and empirical illustration. MIS Q. 33, 177–195. doi: 10.2307/20650284


Wolf, V., and Maier, C. (2024). ChatGPT usage in everyday life: a motivation-theoretic mixed-methods study. Int. J. Inf. Manage. 79:102821. doi: 10.1016/j.ijinfomgt.2024.102821


Yan, M., Filieri, R., and Gorton, M. (2021). Continuance intention of online technologies: a systematic literature review. Int. J. Inf. Manage. 58:102315. doi: 10.1016/j.ijinfomgt.2021.102315


Zhong, W., Luo, J., and Lyu, Y. (2024). How do personal attributes shape AI dependency in Chinese higher education context? Insights from needs frustration perspective. PLoS ONE 19:e0313314. doi: 10.1371/journal.pone.0313314


Zhou, T., and Wang, M. (2025). Examining generative AI user discontinuance from a dual perspective of enablers and inhibitors. Int. J. Hum. Comput. Interact. 41, 6377–6387. doi: 10.1080/10447318.2025.2470280


Keywords: artificial intelligence, continuance intention, resistance to change, frustration, response failure

Citation: Satoto A, Ab Hamid NR, Kamudin N and Silalahi ADK (2025) The paradox of productive irritation: mapping the stress–coping loop that sustains generative-AI engagement. Front. Educ. 10:1709370. doi: 10.3389/feduc.2025.1709370

Received: 20 September 2025; Accepted: 28 October 2025;
Published: 09 December 2025.

Edited by:

Gemma Lluch, University of Valencia, Spain

Reviewed by:

Connie Phelps, Emporia State University, United States
Kevin Baldrich, University of Almeria, Spain

Copyright © 2025 Satoto, Ab Hamid, Kamudin and Silalahi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Andri Dayarana K. Silalahi, andridksilalahi@gmail.com
