- 1 Department of Computer Science and Engineering, University of Nevada, Reno, NV, United States
- 2 Department of Information Technology, Kennesaw State University, Kennesaw, GA, United States
Adaptive learning platforms are increasingly used to enhance online education, yet a gap exists in understanding how the design of their AI-powered features impacts user experience. This study addresses this gap by evaluating three prominent platforms (Khan Academy, Coursera, and Codecademy) in teaching HTML. Using a mixed-methods approach with 23 participants, we assessed task completion time, user satisfaction, engagement, and task accuracy. Results revealed significant performance differences: Codecademy offered the fastest task completion, while Khan Academy achieved the highest user satisfaction. A crucial finding emerged from qualitative and quantitative data: participants found the specific AI-driven adaptive features on all platforms to be subtle and minimally impactful, with core platform interactivity being a more dominant factor. This study's main contribution is the identification of a critical trade-off between learning efficiency and user engagement, which is mediated by the discoverability and perceived value of adaptive features. We conclude that for AI-powered educational tools to realize their full potential, their adaptive features must be more discoverable, intuitive, and integral to the core learning loop. The study provides actionable insights for designers and educators seeking to balance platform efficiency with a more personalized and motivating user experience.
1 Introduction
Adaptive learning systems have emerged as a significant innovation in educational technology, offering the potential to transform the learning experience through personalized content delivery (Strielkowski et al., 2025). These systems, powered by artificial intelligence (AI), dynamically adjust educational materials to align with individual learners' needs, providing real-time feedback, tracking progress, and tailoring content to optimize learning outcomes (Isaeva et al., 2025; Luckin et al., 2016). As the global e-learning market continues its rapid expansion, projected to exceed $400 billion by 2026, the demand for effective and engaging digital education tools has never been greater (Pelletier et al., 2021). Amid this growing demand for flexible, learner-centered approaches, platforms such as Khan Academy, Coursera, and Codecademy have gained prominence, integrating adaptive technologies to enhance user engagement and educational effectiveness. Unlike traditional static teaching methods, adaptive systems promise to offer more efficient, motivating, and engaging learning experiences, which have become increasingly important in the modern educational landscape (Jamali et al., 2024; Brusilovsky and Millán, 2007).
The broader field of adaptive learning encompasses developments in AI, human-computer interaction (HCI), and personalized education. AI technologies serve as the backbone of adaptive platforms, enabling systems to modify content based on user performance, preferences, and interactions. These systems are designed to cater to the specific pace and learning style of each individual, fostering a more personalized educational environment (Troussas et al., 2023; Jamali et al., 2025b). Despite the advances in adaptive learning technologies, significant challenges remain, particularly in understanding how these systems impact user experience and engagement across diverse platforms. Although adaptive learning has been shown to improve educational outcomes (Brusilovsky and Millán, 2007), there is a need for further exploration into how these systems influence user motivation, task completion, and satisfaction in practical applications.
Existing research has primarily focused on the technical and algorithmic aspects of adaptive learning systems, emphasizing the benefits of personalized content delivery in improving learning pathways and performance (Jamali et al., 2025a; Koper, 2014). Studies on adaptive learning technologies have highlighted the capacity of AI to enhance instructional design, leading to more efficient learning experiences. However, despite the growing body of literature on the technical capabilities of these systems, there is a lack of comprehensive evaluations regarding the user experience. Specifically, few studies have investigated how learners perceive the adaptive features embedded within educational platforms and how these features influence their motivation and engagement. This gap is critical, as the pedagogical effectiveness of an AI-driven tool is ultimately mediated by the learner's ability and willingness to use it. This presents an opportunity to explore the human-computer interaction aspects of adaptive learning, particularly in terms of usability, user satisfaction, and the overall effectiveness of adaptive feedback mechanisms (Jamali et al., 2018; MacKenzie, 2013).
The motivation for this study arises from the need to address these gaps by investigating the user experience of adaptive learning platforms. While the technical advancements of AI in education are well-documented, the interactional and motivational aspects from the user's perspective remain underexplored (Singh et al., 2025). This research seeks to evaluate the usability, engagement, and pedagogical efficacy of three prominent adaptive learning platforms—Khan Academy, Coursera, and Codecademy—in the context of HTML learning. These platforms were selected for their differing levels of adaptivity and distinct approaches to personalized education. The study aims to assess how adaptive learning features influence user experience, with a focus on engagement, motivation, and task completion. By evaluating participant interactions across these platforms, the study will provide valuable insights into the design and effectiveness of adaptive educational technologies (Jamali et al., 2025a).
The following research questions will guide the study:
1. How does the task completion time for HTML exercises vary across different adaptive learning platforms?
2. What differences in task accuracy are observed among participants using Khan Academy, Coursera, and Codecademy?
3. How do participants perceive the effectiveness of adaptive features across the studied platforms?
4. Which features do participants find most engaging or motivating during their learning experience?
The objectives of this study are to evaluate how adaptive learning interfaces impact user engagement, motivation, and learning outcomes, and to provide actionable insights for improving the design of such platforms. By identifying the features that most effectively enhance user experience, this research contributes to the broader field of AI-driven education and informs future efforts to optimize adaptive learning environments for a diverse range of learners.
2 Literature review
A comprehensive review of the literature reveals three core areas relevant to this study: the theoretical foundations of AI-powered adaptive learning, frameworks for evaluating user experience (UX) in educational technology, and findings from comparative analyses of online learning platforms.
2.1 AI-powered adaptive learning systems
Adaptive learning systems have a long history rooted in intelligent tutoring systems (ITS), which adapt instruction to learners' responses and knowledge states (Corbett and Anderson, 1994). Modern advances in AI and machine learning have expanded these systems to support dynamic, data-driven personalization rather than only rule-based adaptation (Martin et al., 2020). Learner modeling remains central: systems monitor student performance, preferences, and interactions to adjust the content or support delivered to each individual. Cross-cultural research confirms that while such systems can enhance learning efficiency, their effectiveness often varies across cultural contexts, with user acceptance being higher in educational systems that prioritize personalized learning paths, as shown in a comparative study of pre-service teachers in Turkey and the UAE (Konca et al., 2025). Recent research exploring the impact of personalized scaffolding using AI in higher education demonstrates that scaffolding agents can support self-regulated learning and reflection, though effects on motivation or cognitive load are sometimes variable (Wang, 2025; Siu et al., 2025).
2.2 Theoretical frameworks for UX in educational technology
User experience (UX) in EdTech goes beyond usability to include motivational and affective perceptions. The Technology Acceptance Model (TAM) remains influential, positing that perceived usefulness and ease of use are the primary predictors of technology adoption (Davis, 1989). From a motivational standpoint, Self-Determination Theory (SDT) provides a powerful lens, suggesting that platforms are more engaging when they satisfy learners' basic psychological needs for autonomy (control over their learning), competence (a sense of mastery), and relatedness (Ryan and Deci, 2000). Furthermore, Cognitive Load Theory (CLT) is critical for evaluating interface design, suggesting that effective platforms should minimize extraneous cognitive load (e.g., confusing navigation) to free up mental resources for the intrinsic complexity of the learning material itself (Sweller et al., 1998). Our study evaluates platforms through the integrated lens of these theories, assessing how their design choices impact user perception, motivation, and cognitive processing.
2.3 Comparative studies and research gap
Several studies have compared educational platforms in terms of learning outcomes and engagement. A systematic review by Martin et al. (2020) found that while many studies focus on performance metrics, fewer investigate how learners perceive or discover adaptive features. Similarly, a large multi-national study on teacher trust in AI-EdTech found that factors like self-efficacy and cultural values significantly predict teachers' perceived benefits and concerns, which in turn influence their trust in the technology (Viberg et al., 2025). This underscores that user perception is a critical, yet understudied, mediator of AI adoption. Thus, a gap remains: few studies have conducted a direct, user-centric comparison of major commercial platforms that employ different AI-driven adaptive strategies, especially with respect to how users (a) become aware of adaptive features, (b) perceive their value, and (c) interact with them. This study addresses this gap by comparing three distinct platforms through a mixed-methods lens, focusing specifically on the discoverability and perceived value of their adaptive functionalities.
3 Methodology
3.1 Study design
The methodology for this study was designed to assess the usability and effectiveness of AI-driven adaptive learning platforms. It is important to note that this study was designed as exploratory research. Consequently, the quantitative findings are intended to identify significant trends and generate hypotheses rather than to establish generalizable population parameters. The study used a convergent parallel mixed-methods design, in which quantitative and qualitative data were collected concurrently, analyzed separately, and then merged to provide a comprehensive understanding (Creswell and Clark, 2014). This approach is particularly valuable because it allows the qualitative findings (e.g., participants' comments on feature invisibility) to directly explain the quantitative results (e.g., the neutral Likert-scale ratings for adaptive feature helpfulness). The study employed a within-subjects (repeated-measures) design, in which each of the 23 participants was exposed to all three experimental conditions: the adaptive learning platforms Khan Academy, Coursera, and Codecademy. A Latin square design was used for counterbalancing to mitigate order effects.
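To make the counterbalancing concrete, the following minimal Python sketch shows how a 3 × 3 Latin square can generate presentation orders in which each platform appears once in each serial position, and how participants can be cycled through those orders. The participant numbering and the assignment function are illustrative assumptions, not the study's actual assignment procedure.

```python
# Minimal sketch of 3x3 Latin-square counterbalancing (illustrative only).
PLATFORMS = ["Khan Academy", "Coursera", "Codecademy"]

# Each row is one presentation order; every platform appears exactly once
# in each serial position across the three rows.
latin_square = [
    [PLATFORMS[(row + col) % 3] for col in range(3)]
    for row in range(3)
]

def assign_orders(n_participants: int) -> list[list[str]]:
    """Cycle participants through the Latin-square rows to balance order effects.

    With 23 participants the three orders cannot be filled perfectly evenly
    (8, 8, and 7 participants per order).
    """
    return [latin_square[i % 3] for i in range(n_participants)]

if __name__ == "__main__":
    for pid, order in enumerate(assign_orders(23), start=1):
        print(f"P{pid:02d}: {' -> '.join(order)}")
```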
3.2 Participants and ethics
A total of 23 participants were recruited from university students and individuals with a general interest in web development. The sample size (N = 23) was deemed appropriate for this exploratory study; it aligns with standards for achieving thematic saturation in qualitative research (Guest et al., 2006) and, as a post-hoc power analysis showed, provided adequate power (0.85) for detecting the large effect size observed in our primary quantitative outcome (completion time). Inclusion criteria prioritized basic computer literacy, but no prior knowledge of HTML was required. Ethical approval for this study was obtained from the University of Nevada, Reno Institutional Review Board, and all participants provided written informed consent.
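For transparency about the power claim above, a rough post-hoc check can be reproduced with statsmodels, as sketched below. This sketch deliberately simplifies the design to a between-subjects one-way ANOVA with three groups, which ignores the within-subjects correlation and therefore understates power relative to the repeated-measures design actually used; the conversion from η2 to Cohen's f and the choice of library are assumptions, not the authors' original computation.

```python
# Approximate post-hoc power check (illustrative; a between-subjects
# simplification that understates the power of the repeated-measures design).
from math import sqrt

from statsmodels.stats.power import FTestAnovaPower

eta_squared = 0.27                                # effect size reported for completion time
cohens_f = sqrt(eta_squared / (1 - eta_squared))  # convert eta^2 to Cohen's f (~0.61)

power = FTestAnovaPower().power(
    effect_size=cohens_f,  # Cohen's f
    nobs=23,               # total sample size under the between-subjects simplification
    alpha=0.05,
    k_groups=3,
)
print(f"Approximate power under the simplified model: {power:.2f}")
```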
3.3 Measures
The primary independent variable was the adaptive learning platform, with three levels: Khan Academy, Coursera, and Codecademy. The dependent variables were directly mapped to the research questions:
• Task Completion Time (RQ1): Measured in minutes to assess platform efficiency.
• Task Accuracy (RQ2): Participants' self-estimated percentage of correct actions, serving as a proxy for learning effectiveness.
• Perceptions of Adaptive Features (RQ3): Measured using a 5-point Likert scale for “Helpfulness of Adaptive Features” and through qualitative probes.
• Engaging Features (RQ4): Identified through open-ended questions about participants' favorite features and suggestions for improvement.
• Other measures included user satisfaction, engagement, and confidence levels, all rated on 5-point Likert scales.
For the purposes of this study, “adaptive features” were defined as elements of the platform that use AI to personalize the learning experience. Examples included contextual hints provided by Codecademy's AI Learning Assistant, personalized guidance from the Coursera Coach, and the mastery-based progression system in Khan Academy.
3.4 Procedure
Upon arrival, participants completed a pre-task questionnaire (see Appendix B). They then received a brief demonstration of each platform before working through a standardized set of HTML tutorials and exercises on all three platforms. Each participant was allotted 30–60 min per platform. After each session, they completed a post-task questionnaire capturing their impressions of that specific platform. The study was conducted in a controlled lab environment at the University of Nevada, Reno, using standardized hardware and software to minimize variability.
3.5 Data analysis
Quantitative data were analyzed using a one-way repeated-measures Analysis of Variance (ANOVA) to determine statistically significant differences in completion time, accuracy, and satisfaction across platforms. Post-hoc tests with Bonferroni correction were used to identify specific pairwise differences. Qualitative data were analyzed using the six-phase thematic analysis process outlined by Braun and Clarke (2006). This involved: (1) familiarization with the data by reading all open-ended responses multiple times; (2) generating initial codes for interesting features of the data; (3) searching for potential themes by collating codes into broader patterns; (4) reviewing themes to ensure they formed a coherent pattern; (5) defining and naming the final themes; and (6) producing the analysis, including selecting vivid participant quotes.
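As an illustration of this quantitative pipeline, the sketch below runs a one-way repeated-measures ANOVA followed by Bonferroni-corrected paired comparisons in Python with statsmodels and SciPy. The file name and column names (participant, platform, completion_time) are hypothetical placeholders; this is a generic sketch of the procedure described above, not the authors' analysis script.

```python
# Sketch of the quantitative analysis: repeated-measures ANOVA followed by
# Bonferroni-corrected pairwise paired t-tests. Column names are hypothetical.
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x platform observation.
df = pd.read_csv("completion_times.csv")  # columns: participant, platform, completion_time

# One-way repeated-measures ANOVA with platform as the within-subjects factor.
anova = AnovaRM(df, depvar="completion_time", subject="participant",
                within=["platform"]).fit()
print(anova)

# Post-hoc paired t-tests; Bonferroni correction multiplies each p-value by
# the number of pairwise comparisons (capped at 1.0).
platforms = sorted(df["platform"].unique())
pairs = list(combinations(platforms, 2))
for a, b in pairs:
    x = df[df["platform"] == a].sort_values("participant")["completion_time"].to_numpy()
    y = df[df["platform"] == b].sort_values("participant")["completion_time"].to_numpy()
    t_stat, p_val = stats.ttest_rel(x, y)
    p_bonf = min(p_val * len(pairs), 1.0)
    print(f"{a} vs {b}: t = {t_stat:.2f}, Bonferroni-corrected p = {p_bonf:.3f}")
```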
3.5.1 Reliability and validity measures
The validity of the quantitative measures was strengthened by the within-subjects design. For the qualitative analysis, the primary researcher engaged in iterative coding to ensure consistency. While inter-rater reliability was not formally calculated due to resource constraints typical of exploratory research, credibility was enhanced through reflexive journaling and peer debriefing with a co-author to challenge assumptions and refine the thematic structure. In addition, because key metrics were single-item ratings, Cronbach's alpha could not be calculated; this is noted as a limitation.
4 Analyses performed and results obtained
This section details the statistical analyses conducted and presents the findings. Detailed inferential statistics are provided in Appendix A.
4.1 Participant demographics and background
The study sample consisted of 23 participants with a mean age of 23.0 years. The gender distribution was 65.2% male (n = 15) and 34.8% female (n = 8). In terms of education, 61% (n = 14) were undergraduate students, while 39% (n = 9) were pursuing graduate degrees. Regarding technical background, the majority of participants (74%, n = 17) classified themselves as beginners in HTML; 17% (n = 4) were at an intermediate level, and 9% (n = 2) had no prior experience.
4.2 Quantitative analyses
4.2.1 Task completion time (RQ1)
The key quantitative results for task completion time, accuracy, and user satisfaction across the three platforms are summarized in Table 1. A one-way repeated-measures ANOVA revealed a significant main effect of platform on task completion time, F(2, 44) = 8.31, p < 0.001, η2 = 0.27. Mean completion times varied substantially, with Codecademy being the most efficient (M = 22.8 min, SD = 2.1), followed by Khan Academy (M = 35.2 min, SD = 2.9) and Coursera (M = 36.5 min, SD = 3.2). Post-hoc analyses confirmed that Codecademy was significantly faster than both Khan Academy (p = 0.002) and Coursera (p < 0.001) (Figure 1).
Figure 1. Mean task completion time across platforms. This figure illustrates Codecademy's significantly faster task completion times compared to Khan Academy and Coursera. Error bars represent standard error of the mean.
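As a consistency check, the reported effect sizes are consistent with the partial eta-squared relationship between the F statistic and its degrees of freedom, shown here for the completion-time ANOVA (df1 = 2 for platform, df2 = 44 for error):

η2 = (F × df1) / (F × df1 + df2) = (8.31 × 2) / (8.31 × 2 + 44) ≈ 0.27

The same relationship reproduces the effect sizes reported below for task accuracy (F = 5.21 → η2 ≈ 0.19) and user satisfaction (F = 3.52 → η2 ≈ 0.14).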
4.2.2 Task accuracy (RQ2)
An ANOVA on self-reported task accuracy also revealed a significant effect of platform, F(2, 44) = 5.21, p = 0.009, η2 = 0.19. Participants reported the highest accuracy on Codecademy (M = 92.8%, SD = 3.5), followed by Khan Academy (M = 88.1%, SD = 5.2), and Coursera (M = 80.3%, SD = 4.1). Post-hoc tests showed that accuracy on Coursera was significantly lower than on both Codecademy (p = 0.001) and Khan Academy (p = 0.015).
4.2.3 User satisfaction and perceptions of adaptive features (RQ3)
An ANOVA on user satisfaction ratings revealed a significant effect of platform, F(2, 44) = 3.52, p = 0.037, η2 = 0.14. Mean satisfaction was highest for Khan Academy (M = 4.2, SD = 0.6), followed by Codecademy (M = 4.0, SD = 0.7) and Coursera (M = 3.7, SD = 0.8). However, ratings for the helpfulness of adaptive features were consistently neutral across all platforms, indicating they were not perceived as impactful (Figure 2).
Figure 2. Participant ratings of the user experience. (a) Shows consistent ease of navigation but neutral ratings for adaptive feature helpfulness across all platforms. (b) Highlights that Khan Academy led in both self-reported engagement and satisfaction.
4.3 Qualitative findings (RQ3 & RQ4)
Thematic analysis of open-ended responses revealed three key themes. First, the subtlety and lack of discoverability of adaptive features was a prominent finding. As Participant 1 noted, they “Didn't notice any adaptive features”. Another participant remarked, “The platform just felt like a normal course; I couldn't tell what was 'adaptive' about it”. Second, the critical role of interactivity emerged as the most engaging feature. Coursera was frequently criticized for its “Lack of interactivity”, while Khan Academy's “Video-exercise combination” and Codecademy's “Immediate feedback” were praised. As one user stated, “Getting instant feedback on Codecademy made me feel like I was actually learning, not just watching”. Third, participants' suggestions for platform improvements consistently centered on a desire for “More meaningful gamification” and more explicit adaptive tools like intuitive quizzes (Figure 3).
Figure 3. Frequency of recurring themes from qualitative data. The chart shows “Engagement” and “Intuitive Quizzes” were the most frequently mentioned positive themes, while “Lack of Interactivity” was a common critique. Bars represent the number of participants who mentioned each theme.
5 Discussion
This section synthesizes the findings in relation to the research questions and the broader literature, discusses the study's theoretical and practical contributions, and outlines its limitations.
5.1 Summary of RQ answers
RQ1: Task completion time varied significantly, with Codecademy being the fastest. RQ2: Task accuracy also differed significantly, with Coursera users reporting the lowest accuracy. RQ3: Participants perceived adaptive features as subtle and minimally impactful across all platforms. RQ4: The most engaging features were those related to core interactivity (hands-on exercises, immediate feedback), not the specific AI adaptive tools.
5.2 Key insights and theoretical contributions
The results emphasized the distinct strengths of each platform, revealing a critical insight: the mere presence of AI-powered adaptive features does not guarantee a positive user experience. This finding contributes to the literature by highlighting the efficiency-engagement trade-off, which can be understood through established theoretical frameworks.
• Codecademy, with its fast completion times and high accuracy, optimized for efficiency. From a Cognitive Load Theory perspective, its streamlined, interactive design minimized extraneous cognitive load, allowing users to focus their mental resources directly on the coding tasks.
• Khan Academy, in contrast, optimized for engagement and satisfaction. Its combination of video tutorials and immediate hands-on exercises directly supported learners' need for competence as described by Self-Determination Theory. This guided structure provided a sense of mastery that users found highly satisfying, even if it took longer.
• Our study suggests that the Technology Acceptance Model helps explain this trade-off. Because the AI features were not discoverable, their “perceived usefulness” was neutral. Consequently, users based their judgment on the core platform's design. This makes the perceived usefulness of the primary interaction loop the dominant factor in their experience, mediating the balance between efficiency and engagement.
5.3 Comparison with the broader EdTech landscape
Our findings align with and extend existing research. The challenge of making AI features discoverable is not unique to our studied platforms and has been noted in research on Duolingo (Settles and Meeder, 2018) and hybrid human-AI tutoring systems (Thomas et al., 2024). Our results mirror a key challenge identified in recent studies on generative AI in education: the most technically advanced feature is not always the most practically effective if its function is not immediately clear to the learner (Kasneci et al., 2024). Furthermore, our findings confirm the conclusions of Brusilovsky and Millán (2007) and Luckin et al. (2016) regarding the importance of robust learner models and clear feedback loops.
5.4 Practical implications
Our findings offer distinct, actionable insights for multiple stakeholders:
• For developers: The “invisible AI” problem is a critical design challenge. Instead of subtle background adaptations, developers should consider making adaptive support more explicit (e.g., “We noticed you're struggling with this concept. Here's a different explanation.”). The trade-off between efficiency (Codecademy's strength) and guided engagement (Khan Academy's strength) should be a conscious design choice.
• For educators: Platform selection should align with pedagogical goals. For foundational skill acquisition where speed and accuracy are paramount, a platform like Codecademy may be ideal. For deeper engagement and supporting learners who benefit from more scaffolding, a platform like Khan Academy may be superior.
• For learners: This study highlights the importance of actively exploring a platform's features. Learners should be encouraged to seek out and experiment with tools like AI assistants or personalized feedback to maximize their learning experience.
5.5 Limitations and future work
The study's findings should be interpreted in light of several limitations. First, the small sample size (N = 23), while adequate for qualitative saturation, limits the generalizability of the findings; replication with larger and more diverse cohorts is therefore needed. Second, our measures were limited and did not include direct assessments of motivation, cognitive load, or long-term knowledge retention. Third, the focus on a single topic (HTML) may not represent user experiences in other domains. Fourth, our participant pool consisted primarily of university students in the United States, which may introduce cultural and contextual bias. Finally, key reliability metrics such as inter-rater reliability were not assessed. Future research should extend these findings by incorporating larger, more diverse samples, adopting longitudinal designs to measure learning over time, and including validated instruments to measure motivation and cognitive load, as suggested by the findings of Singh et al. (2025).
6 Conclusion
This study examined the usability and effectiveness of three adaptive learning platforms. The findings revealed significant differences across platforms in task completion time, accuracy, and user satisfaction. Codecademy demonstrated the fastest completion time, while Khan Academy was rated highest in overall satisfaction. Qualitative feedback identified that core interactivity, not subtle AI features, was the primary driver of a positive user experience. This research highlights the importance of balancing efficiency and engagement in adaptive platform design. Future studies should explore broader topics and larger participant pools to further advance adaptive learning systems.
Data availability statement
Due to IRB restrictions and participant privacy protection, the individual-level data supporting this study cannot be made publicly available. Aggregated data and analysis scripts may be available from the corresponding author upon reasonable request, subject to IRB approval. The study questionnaires are included in Appendix B.
Ethics statement
The studies involving humans were approved by University of Nevada, Reno Institutional Review Board (IRB). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
HJ: Writing – review & editing, Writing – original draft, Visualization, Methodology, Validation, Investigation, Software, Data curation, Conceptualization. SD: Funding acquisition, Project administration, Writing – review & editing, Supervision. FH: Funding acquisition, Project administration, Supervision, Writing – review & editing. RW: Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This material was based in part upon work supported by the National Science Foundation under grant #OAC-2209806, #OIA-2148788, and #2517218.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that no Gen AI was used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Author disclaimer
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcomp.2025.1672081/full#supplementary-material
References
Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa
Brusilovsky, P., and Millán, E. (2007). “User models for adaptive hypermedia and adaptive educational systems,” in The Adaptive Web: Methods and Strategies of Web Personalization, eds. P. Brusilovsky, A. Kobsa, and W. Nejdl (Cham: Springer), 3–53.
Corbett, A. T., and Anderson, J. R. (1994). Knowledge tracing: modeling the acquisition of procedural knowledge. User Model. User-Adapt. Interact. 4, 253–278. doi: 10.1007/BF01099821
Creswell, J. W., and Clark, V. L. P. (2014). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. New York City: Sage Publications.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13, 319–340. doi: 10.2307/249008
Guest, G., Bunce, A., and Johnson, L. (2006). How many interviews are enough?: An experiment with data saturation and variability. Field Methods 18, 59–82. doi: 10.1177/1525822X05279903
Isaeva, R., Karasartova, N., Dznunusnalieva, K., Mirzoeva, K., and Mokliuk, M. (2025). Enhancing learning effectiveness through adaptive learning platforms and emerging computer technologies in education. Jurnal Ilmiah Ilmu Terapan Universitas Jambi 9, 144–160. doi: 10.22437/jiituj.v9i1.37967
Jamali, H., Dascalu, S. M., and Harris, F. C. Jr (2024). “Fostering joint innovation: a global online platform for ideas sharing and collaboration,” in Information Technology – New Generations, ed. S. Latifi (Cham: Springer), 305–312.
Jamali, H., Dascalu, S. M., Harris, F. C., and Feil-Seifer, D. (2025a). “Optimizing personalized learning pathways with the salp swarm algorithm: A novel approach,” in 2025 6th International Conference on Artificial Intelligence, Robotics and Control (AIRC) (Savannah, GA: IEEE), 291–297.
Jamali, H., Debolt, A., Dalton, H., Layosa, J., Macy, I., Shill, P., et al. (2025b). “Fore: a student-centered framework for accessible robotics education through simulation and interactive learning,” in 2025 ASEE Annual Conference & Exposition (Washington, DC: American Society for Engineering Education).
Jamali, H., Karimi, A., and Haghighizadeh, M. (2018). “A new method of cloud-based computation model for mobile devices: energy consumption optimization in mobile-to-mobile computation offloading,” in Proceedings of the 6th International Conference on Communications and Broadband Networking (New York, NY: ACM), 32–37.
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2024). ChatGPT for good? On the implications of generative AI for education and society. Learn. Individ. Differ. 103:102274. doi: 10.1016/j.lindif.2023.102274
Konca, A. S., Simsar, A., Alhajji, R., and Al Mansoori, A. (2025). Cross-cultural perspectives on AI adoption in teacher education: a comparative study of pre-service teachers in Turkey and the United Arab Emirates. Interact. Learn. Environm. doi: 10.1080/10494820.2025.2488143 [Epub ahead of print].
Koper, R. (2014). Conditions for effective smart learning environments. Smart Learn. Environm. 1:5. doi: 10.1186/s40561-014-0005-4
Luckin, R., Holmes, W., Griffiths, M., and Forcier, L. B. (2016). Intelligence Unleashed: An argument for AI in Education. London, UK: UCL Knowledge Lab.
MacKenzie, I. S. (2013). Human-Computer Interaction: An Empirical Research Perspective. San Francisco, CA: Morgan Kaufmann.
Martin, F., Chen, Y., Moore, R. L., and Westine, C. D. (2020). Systematic review of adaptive learning research designs, context, strategies, and technologies from 2009 to 2018. Educ. Technol. Res. Dev. 68, 1903–1929. doi: 10.1007/s11423-020-09793-2
Pelletier, K., Brown, M., Brooks, D. C., McCormack, M., Reeves, J., Arbino, N., et al. (2021). 2021 Educause Horizon Report, Teaching and Learning Edition. Technical Report. Nashville: EDUCAUSE.
Ryan, R. M., and Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 55:68. doi: 10.1037//0003-066X.55.1.68
Settles, B., and Meeder, B. (2018). “Birdbrain: A system for personalized language education,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (Stroudsburg, PA: Association for Computational Linguistics), 487–492.
Singh, A. K., Kiriti, M. K., Singh, H., and Shrivastava, A. (2025). Education AI: exploring the impact of artificial intelligence on education in the digital age. Int. J. Syst. Assuran. Eng. Managem. 1–14. doi: 10.1007/s13198-025-02755-y
Siu, K. W. M., Zou, J., and Jiang, Y. (2025). Dynamic scaffolding: exploring the role of artificial intelligence in urban design education. Front. Urban Rural Plan. 3:7. doi: 10.1007/s44243-025-00060-7
Strielkowski, W., Grebennikova, V., Lisovskiy, A., Rakhimova, G., and Vasileva, T. (2025). AI-driven adaptive learning for sustainable educational transformation. Sustain. Dev. 33, 1921–1947. doi: 10.1002/sd.3221
Sweller, J., Van Merrienboer, J. J., and Paas, F. G. (1998). Cognitive architecture and instructional design. Educ. Psychol. Rev. 10, 251–296. doi: 10.1023/A:1022193728205
Thomas, M., Warner, B., and Holstein, K. (2024). “Improving student learning with hybrid human–ai tutoring: a three-study quasi-experimental investigation,” in Proceedings of the 14th Learning Analytics and Knowledge Conference (LAK '24) (New York, NY: ACM).
Troussas, C., Krouska, A., and Sgouropoulou, C. (2023). “A novel framework of human-computer interaction and human-centered artificial intelligence in learning technology,” in Human-Computer Interaction and Augmented Intelligence: The Paradigm of Interactive Machine Learning in Educational Software (Cham: Springer), 19–35.
Viberg, O., Cukurova, M., Feldman-Maggor, Y., Alexandron, G., Wasson, B., Shirai, S., et al. (2025). What explains teachers' trust in AI in education across six countries? Int. J. Artif. Intellig. Educ. 35, 1288–1316. doi: 10.1007/s40593-024-00433-x
Keywords: adaptive learning, user experience, human-computer interaction, educational technology, online learning, HTML
Citation: Jamali H, Dascalu SM, Harris FC Jr and Wu R (2025) AI-powered adaptive learning interfaces: a user experience study in education platforms. Front. Comput. Sci. 7:1672081. doi: 10.3389/fcomp.2025.1672081
Received: 23 July 2025; Accepted: 21 October 2025;
Published: 12 November 2025.
Edited by:
Xiaoxun Sun, Australian Council for Educational Research, Australia
Reviewed by:
Songyu Jiang, Rajamangala University of Technology Rattanakosin, Thailand
Chandan Pal Singh, South Kazakhstan State Pedagogical Institute, Kazakhstan
Copyright © 2025 Jamali, Dascalu, Harris and Wu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Hossein Jamali, hjamali@unr.edu