1 Introduction
The accelerating evolution of artificial intelligence (AI) has reshaped the educational landscape, driving the adoption of intelligent tools across educational environments (Msambwa et al., 2025; Walter, 2024). Within this broader technological shift, AI chatbots have gained particular prominence in language education. Empirical studies consistently highlight their benefits, demonstrating improved vocabulary development, heightened engagement, and enhanced instructional efficiency (Almasri, 2024; Zhang and Huang, 2024; O'Neill et al., 2025). Their capacity to deliver interactive, personalized, and on-demand communicative practice offers learners and teachers opportunities to engage in meaningful language use beyond the constraints of traditional classrooms (Pan et al., 2024; Li et al., 2025). This interactive engagement is consistent with human–machine communication (HMC) (Guzman and Lewis, 2020; Guzman et al., 2023), as learners tend to attribute social presence and communicative intent to chatbots, perceiving them as meaningful interlocutors.
Yet the expanding presence of AI chatbots has generated debate, primarily due to their complex psychological implications for users. Evidence shows that over-reliance on AI can undermine students' cognitive skills, critical thinking, creativity, and motivation, while simultaneously intensifying teachers' responsibilities and pressures associated with human–AI collaboration (Kim, 2024; Zhai et al., 2024). Other studies caution that substituting human interaction with AI chatbot communication may erode social support, weaken sense of belonging, and ultimately undermine wellbeing, academic persistence, and overall success (Crawford et al., 2024; Klimova and Pikhart, 2025). In contrast, an emerging body of research identifies clear psychological advantages: interactions with chatbots have been associated with reduced anxiety, increased confidence, and strengthened motivation in language learning (Wei et al., 2025; Zhang, 2026).
These contrasting findings underscore a crucial point: the impact of AI chatbots in language education cannot be fully understood through a purely technological lens. Although evidence on the psychological effects of AI chatbots is growing, existing findings remain fragmented, underscoring the need for a more critical, psychology-driven reconsideration of their role in language education. To move beyond the prevailing hype and surface-level enthusiasm, this study examines AI chatbots in digital environments from three interrelated psychological perspectives: cognitive, affective, and social (Mayer, 2024; Schneider et al., 2022). Unlike existing studies that primarily evaluate the effectiveness or feasibility of AI chatbots, this opinion argues for a psychological reorientation of how chatbots are conceptualized in language education. Specifically, it reframes AI chatbots as interactional presences that shape cognitive processing, emotional regulation, and social engagement in interconnected ways.
2 AI chatbots in language education through three psychological lenses
2.1 Cognitive psychological lens: enhanced attention but risks of cognitive overload
From a cognitive psychological perspective, AI chatbots appear to support learners' attentional focus and knowledge processing. Studies have shown that chatbot-assisted interactions can increase learners' engagement, deepen topic exploration, and scaffold comprehension in academic tasks such as research proposal writing (Smirnova, 2025). Similarly, voice-based chatbots have been found to sustain learners' attention and facilitate meaningful practice, partly due to their immediate, adaptive responses (Koç and Savaş, 2025).
However, these reported benefits often center on surface-level behavioral indicators—such as time-on-task or perceived engagement—rather than examining whether chatbots genuinely improve cognitive processing quality. Psychological risks are also evident. Chatbot-generated inaccuracies may create confusion, increase extraneous cognitive load, and weaken autonomous problem-solving (Huete-García and Tarp, 2024). The tendency of large language models to generate factually unreliable yet coherent responses further exacerbates this cognitive burden (Lappin, 2024). Moreover, learners may become behaviorally dependent on chatbots for tasks requiring generative thinking, potentially reducing opportunities for deeper cognitive engagement (Feng et al., 2025). Taken together, while chatbots may enhance cognitive engagement, their impact on cognitive quality remains unclear, warranting more critical scrutiny.
2.2 Affective psychological lens: reduced anxiety but uncertain long-term outcomes
From an affective psychological perspective, chatbots have been shown to reduce learner anxiety and foster low-stress learning environments. Studies report reductions in foreign language anxiety, increases in confidence, and greater willingness to communicate when learners interact with AI chatbots rather than human partners (Kohnke et al., 2023; Zhang, 2026; Kim and Su, 2024). Furthermore, AI chatbots validate learners' non-native language output—accurately interpreting diverse accents and learner varieties—helping reduce reliance on native-speaker norms and boosting perceived competence (Lee et al., 2025). Yet, affective benefits still require careful interpretation. As many studies rely heavily on self-reported perceptions, their findings risk conflating positive emotional reactions with genuine learning improvement. Reduced anxiety may also lead learners to prefer chatbot-based communication and avoid real social interaction, limiting their ability to cope with authentic communicative pressure. Moreover, the short-term motivational boosts reported in the literature offer limited insight into whether chatbots sustain long-term engagement or language development. Thus, while chatbots often enhance emotional comfort, their long-term psychological implications remain insufficiently explored.
2.3 Social psychological lens: increased social connectedness but erosion of human interaction
From a social psychological perspective, AI chatbots can shape learners' sense of connection, belonging, and relational trust. Politeness-enabled chatbots, for instance, have been shown to increase users' trust, reduce defensiveness, and foster a sense of social connectedness, contributing to more positive interactional experiences (Brummernhenrich et al., 2025). These socially meaningful interactions also reveal a dynamic socio-technical relationship, suggesting that attention to visual and emotional design elements can enhance engagement and overall user experience (Haqqu et al., 2025). Learners' perceptions of chatbots have also been linked to improvements in self-perceived competence and writing engagement (Mills et al., 2025). In these cases, chatbots serve as supportive, non-judgmental social partners.
However, social psychological research also warns of potential relational costs. When chatbots replace human-human interaction, they may inadvertently reduce social support, weaken learners' sense of belonging, and ultimately undermine wellbeing and academic persistence (Crawford et al., 2024; Klimova and Pikhart, 2025). Moreover, reliance on AI-mediated interaction risks diminishing opportunities for genuine social negotiation—an essential component of language learning. In broader terms, ethical concerns related to privacy, data misuse, and algorithmic bias can further erode trust in AI systems (Zhang and Yu, 2025; Labadze et al., 2023; Yigci et al., 2025). These studies suggest that while AI chatbots can foster social connectedness in controlled contexts, their long-term social implications in educational settings remain ambiguous and require careful consideration.
In this way, the cognitive, affective, and social dimensions discussed above should not be understood as independent effects of AI chatbot use. Specifically, reduced anxiety may lower cognitive load and encourage engagement, while perceived social presence can enhance motivation and sustained participation. At the same time, cognitive reliance on chatbots may gradually reshape learners' emotional comfort and preferences for social interaction. Overall, these dynamics suggest that the psychological impact of AI chatbots in language education emerges from the interplay of cognitive, affective, and social processes rather than from isolated mechanisms.
3 Future directions
Current studies on AI chatbots in language education have generated valuable insights into cognitive engagement, emotional responses, and social experiences (Ebadi and Amini, 2024; Zou et al., 2025). To advance understanding in this area, several directions warrant further exploration.
First, although this study provides insights into users' cognitive, emotional, and social experiences, the interrelations among these dimensions in chatbot-mediated teaching remain underexplored. Future studies could employ experimental, longitudinal, or mixed-method designs to examine how different types of chatbot interactions influence learners' cognitive load (Sweller, 1988), emotional engagement (Fredricks et al., 2004), motivation, social connectedness (Gunawardena, 1995), and trust across diverse educational contexts, such as university language courses, online learning, and blended learning environments. Such studies would provide a deeper understanding of the psychological pathways through which chatbots affect language education.
Second, pedagogical practice requires designs that intentionally scaffold these three domains. Rather than allowing chatbots to replace higher-order thinking, instruction should encourage students and teachers to interrogate, compare, and refine chatbot outputs. Affective scaffolding should balance anxiety-reducing practice with authentic communicative demands (Krashen, 1982) to prevent overreliance on low-pressure environments. Socially, chatbots should be positioned as supplementary partners within a community of inquiry (Garrison et al., 1999) rather than substitutes for peers or teachers. In particular, chatbots can be integrated as preparatory or reflective tools to support idea generation, low-stakes rehearsal, and the interpretation of feedback. Core processes of meaning-making, negotiation, and evaluative judgment should remain anchored in instructional interaction. Future research can also evaluate how different instructional designs, such as preparatory or reflective chatbot activities, affect students' engagement, anxiety reduction, and reflective use, using classroom observation, interaction analysis, and learning outcome assessment. Thus, instructional integration foregrounds human agency and interaction, with AI chatbots functioning as structured support alongside essential pedagogical relationships.
Finally, institutional policies should account for the psychological implications of AI-assisted education. Clear usage guidelines, professional training, and support systems can help teachers manage the cognitive and emotional demands of supervising AI-mediated activities. Ethical safeguards on data privacy, informed consent, and bias detection are also necessary to uphold users' wellbeing and trust. Teachers play a critical role in guiding students' engagement with AI chatbots by monitoring cognitive reliance, facilitating emotional regulation, and fostering reflective use. Educational institutions are responsible for establishing governance structures, providing professional training, and setting ethical guidelines that determine how and why chatbots are implemented in formal learning contexts.
4 Conclusion
This article argues that much of the current enthusiasm surrounding AI chatbots in language education risks oversimplifying their pedagogical value. By moving beyond the prevailing hype and reframing chatbots through cognitive, affective, and social psychological lenses, it shows that their impact is far more nuanced than commonly claimed. While evidence demonstrates clear benefits—such as enhanced attentional focus, reduced anxiety, and increased social connectedness—equally important psychological risks emerge, including cognitive overload, avoidance of authentic communication, and weakened human interaction. The contribution of this study lies in synthesizing these fragmented findings into a coherent psychological interpretation, offering a more balanced and comprehensive understanding of how chatbots actually shape language education. This perspective underscores the need for future research to examine underlying psychological processes rather than surface-level perceptions, encourages pedagogical designs that intentionally scaffold cognitive, emotional, and social development, and calls for institutional policies that ensure ethical, transparent, and human-centered AI use. Crucially, the ethical use of AI chatbots in language education requires institutional governance regarding data ownership, transparency in system decision-making, and mechanisms to safeguard user trust. Without such structures, potential psychological benefits may be undermined by concerns over privacy, accountability, and power asymmetries. Taken together, a psychological reorientation enables the field to evaluate AI chatbots not as inherently beneficial or harmful tools, but as complex cognitive-affective-social technologies whose value depends on how they are designed, integrated, and governed.
Statements
Author contributions
XZ: Writing – original draft, Writing – review & editing. CL: Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript. Grammarly was used for language refinement, and all edits were reviewed and approved by the author(s) to ensure academic integrity.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Almasri, F. (2024). Exploring the impact of artificial intelligence in teaching and learning of science: a systematic review of empirical research. Res. Sci. Educ. 54, 977–997. doi: 10.1007/s11165-024-10176-3
Brummernhenrich, B., Paulus, C. L., and Jucks, R. (2025). Applying social cognition to feedback chatbots: enhancing trustworthiness through politeness. Br. J. Educ. Technol. 56, 2321–2340. doi: 10.1111/bjet.13569
Crawford, J., Allen, K.-A., Pani, B., and Cowling, M. (2024). When artificial intelligence substitutes humans in higher education: the cost of loneliness, student success, and retention. Stud. High. Educ. 49, 883–897. doi: 10.1080/03075079.2024.2326956
Ebadi, S., and Amini, A. (2024). Examining the roles of social presence and human-likeness on Iranian EFL learners' motivation using artificial intelligence technology: a case of CSIEC chatbot. Interact. Learn. Environ. 32, 655–673. doi: 10.1080/10494820.2022.2096638
Feng, X., Cai, L., Liu, W., and Zhang, X. (2025). Chatbots in children's collaborative making: exploring challenges and implications for interaction design. Springer, 15935, 52–60. doi: 10.1007/978-3-032-02534-0_7
Fredricks, J. A., Blumenfeld, P. C., and Paris, A. H. (2004). School engagement: potential of the concept, state of the evidence. Rev. Educ. Res. 74, 59–109. doi: 10.3102/00346543074001059
Garrison, D. R., Anderson, T., and Archer, W. (1999). Critical inquiry in a text-based environment: computer conferencing in higher education. Internet High. Educ. 2, 87–105. doi: 10.1016/S1096-7516(00)00016-6
Gunawardena, C. N. (1995). Social presence theory and implications for interaction and collaborative learning in computer conferences. Int. J. Educ. Telecommun. 1, 147–166.
Guzman, A. L., and Lewis, S. C. (2020). Artificial intelligence and communication: a human-machine communication research agenda. New Media Soc. 22, 70–86. doi: 10.1177/1461444819858691
Guzman, A. L., McEwen, R., and Jones, S. (2023). The SAGE Handbook of Human-Machine Communication. Thousand Oaks: SAGE Publications Ltd. doi: 10.4135/9781529782783
Haqqu, R., Zahrani, A. R., Wulandari, A., Ersyad, F. A., and Adim, A. K. (2025). Human-AI in affordance perspective: a study on ChatGPT users in the context of Indonesian users. Front. Comput. Sci. 7:1623029. doi: 10.3389/fcomp.2025.1623029
Huete-García, Á., and Tarp, S. (2024). Training an AI-based writing assistant for Spanish learners: the usefulness of chatbots and the indispensability of human-assisted intelligence. Lexikos 34, 21–40. doi: 10.5788/34-1-1862
Kim, A., and Su, Y. (2024). How implementing an AI chatbot impacts Korean as a foreign language learners' willingness to communicate in Korean. System 122:103256. doi: 10.1016/j.system.2024.103256
Kim, J. (2024). Leading teachers' perspective on teacher-AI collaboration in education. Educ. Inf. Technol. 29, 8693–8724. doi: 10.1007/s10639-023-12109-5
Klimova, B., and Pikhart, M. (2025). Exploring the effects of artificial intelligence on student and academic wellbeing in higher education: a mini-review. Front. Psychol. 16:1498132. doi: 10.3389/fpsyg.2025.1498132
Koç, F. S., and Savaş, P. (2025). The use of artificially intelligent chatbots in English language learning: a systematic meta-synthesis study of articles published between 2010 and 2024. ReCALL 37, 4–21. doi: 10.1017/S0958344024000168
Kohnke, L., Moorhouse, B. L., and Zou, D. (2023). ChatGPT for language teaching and learning. RELC J. 54, 537–550. doi: 10.1177/00336882231162868
Krashen, S. (1982). Principles and Practice in Second Language Acquisition. Oxford: Pergamon Press.
Labadze, L., Grigolia, M., and Machaidze, L. (2023). Role of AI chatbots in education: systematic literature review. Int. J. Educ. Technol. High. Educ. 20:56. doi: 10.1186/s41239-023-00426-1
Lappin, S. (2024). Assessing the strengths and weaknesses of large language models. J. Log. Lang. Inf. 33, 9–20. doi: 10.1007/s10849-023-09409-x
Lee, S., Jeon, J., and Choe, H. (2025). Enhancing pre-service teachers' Global Englishes awareness with technology: a focus on AI chatbots in 3D metaverse environments. TESOL Quart. 59, 49–74. doi: 10.1002/tesq.3300
Li, Y., Zhou, X., and Chiu, T. K. (2025). Systematic review on artificial intelligence chatbots and ChatGPT for language learning and research from self-determination theory (SDT): what are the roles of teachers? Interact. Learn. Environ. 33, 1850–1864. doi: 10.1080/10494820.2024.2400090
Mayer, R. E. (2024). The past, present, and future of the cognitive theory of multimedia learning. Educ. Psychol. Rev. 36:8. doi: 10.1007/s10648-023-09842-1
Mills, N., Hok, H., Dressen, A., and Veillas, Q. (2025). The design and evaluation of an interactive AI companion for foreign language writing. Foreign Lang. Ann. 58, 40–69. doi: 10.1111/flan.12790
Msambwa, M. M., Wen, Z., and Daniel, K. (2025). The impact of AI on the personal and collaborative learning environments in higher education. Eur. J. Educ. 60:e12909. doi: 10.1111/ejed.12909
O'Neill, I., Ferrario, M. A., and Dore, T. (2025). Getting GPT to tutor like me. Interact. Comput. 36:041. doi: 10.1093/iwc/iwaf041
Pan, M., Guo, K., and Lai, C. (2024). Using artificial intelligence chatbots to support English-as-a-foreign-language students' self-regulated reading. RELC J. 56:264030. doi: 10.1177/00336882241264030
Schneider, S., Beege, M., Nebel, S., Schnaubert, L., and Rey, G. D. (2022). The cognitive-affective-social theory of learning in digital environments (CASTLE). Educ. Psychol. Rev. 34, 1–38. doi: 10.1007/s10648-021-09626-5
Smirnova, L. (2025). Developing students' agency and voice by using generative AI in an online EAP module. Innov. Lang. Learn. Teach., 1–11. doi: 10.1080/17501229.2025.2538781
Sweller, J. (1988). Cognitive load during problem solving: effects on learning. Cogn. Sci. 12, 257–285. doi: 10.1207/s15516709cog1202_4
Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int. J. Educ. Technol. High. Educ. 21:15. doi: 10.1186/s41239-024-00448-3
Wei, W., Zhao, A., and Ma, H. (2025). Understanding how AI chatbots influence EFL learners' oral English learning motivation and outcomes: evidence from Chinese learners. IEEE Access 13, 56699–56716. doi: 10.1109/ACCESS.2025.3554545
Yigci, D., Eryilmaz, M., Yetisen, A. K., Tasoglu, S., and Ozcan, A. (2025). Large language model-based chatbots in higher education. Adv. Intell. Syst. 7:2400429. doi: 10.1002/aisy.202400429
Zhai, C., Wibowo, S., and Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11:28. doi: 10.1186/s40561-024-00316-7
Zhang, W. (2026). The impact of AI chatbots on EFL learners' oral proficiency and willingness to communicate. System 136:103919. doi: 10.1016/j.system.2025.103919
Zhang, Y., and Yu, Z. (2025). Emotional attachment as the key mediator: expanding UTAUT2 to examine how perceived anthropomorphism in intelligent agents influences the sustained use of DouBao (Cici) among EFL learners. Educ. Inf. Technol. 30, 1–23. doi: 10.1007/s10639-025-13721-3
Zhang, Z., and Huang, X. (2024). The impact of chatbots based on large language models on second language vocabulary acquisition. Heliyon 10:e25370. doi: 10.1016/j.heliyon.2024.e25370
Zou, B., Wang, C., He, H., Li, C., Purwanto, E., and Wang, P. (2025). Enhancing EFL writing with visualised GenAI feedback: a cognitive affective theory of learning perspective on revision quality, emotional response, and human-computer interaction. Learn. Motiv. 91:102158. doi: 10.1016/j.lmot.2025.102158
Keywords
a psychological perspective, affective lens, AI chatbots, cognitive lens, language education, social lens
Citation
Zhou X and Li C (2026) Beyond the hype: a psychological perspective on AI chatbots. Front. Psychol. 17:1766427. doi: 10.3389/fpsyg.2026.1766427
Received
12 December 2025
Revised
29 January 2026
Accepted
30 January 2026
Published
05 March 2026
Volume
17 - 2026
Edited by
Daniel H. Robinson, The University of Texas at Arlington College of Education, United States
Reviewed by
Rizca Haqqu, Telkom University, Indonesia
Copyright
© 2026 Zhou and Li.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Xuanxuan Zhou, p122141@siswa.ukm.edu.my; Chengli Li, p131127@siswa.ukm.edu.my