Abstract
The growing presence of artificial intelligence (AI) across educational and workplace environments is reshaping how learners encounter tasks, interpret feedback, and navigate uncertainty. To understand these changes, this manuscript grounds AI's influence in theories of self-regulated learning (SRL), which conceptualize learning as a cyclical process of planning, monitoring, strategic adjustment, and reflection. Rather than replacing these processes, AI reshapes the conditions under which they occur by making some cues more visible, introducing new forms of guidance, and occasionally preempting difficulty before learners have an opportunity to engage with it. These shifts reveal a conceptual gap: although research documents both benefits and risks of AI-mediated support, we lack a framework for understanding how AI participates in learners' regulatory cycles across educational and professional settings without eroding the autonomy that underpins SRL. To address this gap, this article proposes a unified model of AI as a co-regulator within self-regulated learning, grounded in Winne and Hadwin's COPES architecture. The model centers productive metacognitive friction as a mechanism for sustaining learner-driven regulation by structuring how learners encounter challenge and discrepancy. It advances a relationally grounded framework at the level of interactional structure, positioning AI as a co-regulator through five design principles that specify conditions under which AI can support regulatory cycles without displacing learner judgment. These principles are linked to an evaluation architecture that centers autonomy, interpretability, process integrity, and developmental growth as evaluative priorities traced through learner–AI interaction patterns. Implications are examined across educational practice, workplace learning, equity, and governance, and directions for collaborative research and design are outlined to investigate how relationally aligned AI can preserve and strengthen the regulatory processes at the heart of SRL.
1 Introduction
1.1 The changing landscape of learning
Artificial intelligence is becoming an embedded component of how people learn, work, and navigate complex tasks. Tools that generate explanations, organize information, suggest strategies, and model potential approaches now operate alongside the practices learners already use to manage uncertainty. These systems do more than add new resources; they reshape the timing, visibility, and interpretability of the cues learners use to guide their thinking. Assistance that once required deliberate seeking, including feedback from peers, clarification from instructors, or self-generated strategies, can now appear instantly, sometimes embedded directly within the task.
These shifts raise questions about how learners stay oriented within their own processes of understanding. Immediate guidance can shape when learners pause, how they interpret moments of difficulty, and whether they explore strategies before acting. As AI becomes a stable presence in learning ecologies, the issue is no longer adoption but orientation: how these systems reshape opportunities to notice discrepancies, interpret challenge, and adjust action. This manuscript approaches AI's expanding role by examining how such interventions participate in regulatory activity already underway, positioning SRL rather than tool capability as the primary lens for understanding AI's educational and workplace impact (Molenaar, 2022).
1.2 Why SRL must be the anchor
To understand how AI shapes learning, we begin with the theory that explains how learners guide their own understanding. Across influential models developed by Zimmerman (2000), Pintrich et al. (1991), and Winne and Hadwin (1998, 2008), SRL is conceptualized as a dynamic process in which learners plan, monitor, adjust, and reflect. Through these coordinated activities, learners align goals, strategies, motivation, and interpretations across time and context, with different models emphasizing distinct aspects of regulation rather than competing accounts of it. SRL is not a fixed trait; it is the mechanism through which agency develops and expertise takes shape.
As AI enters educational and workplace environments, its influence must be interpreted through this architecture. Whatever its modality (feedback, explanation, recommendation, or example), AI necessarily intersects with the cues learners use to notice discrepancies, evaluate progress, and decide how to act. For example, a highlight that draws attention to an oversight engages monitoring by making discrepancies more visible. A suggestion offering a contrasting tactic intersects with learners' strategic knowledge, shaping how they evaluate alternative courses of action. An example that clarifies task structure can influence control decisions and persistence by altering how learners interpret the demands of the task. Even when learners remain the primary regulators of their learning, AI becomes embedded within the conditions under which monitoring, control, and reflection occur.
SRL also clarifies why guidance must remain interpretable and aligned with learner-endorsed goals. As AI begins to share regulatory space, the clarity and timing of its interventions gain new importance. Grounding the analysis in SRL keeps autonomy, interpretability, and developmental growth central, thereby preventing these regulatory processes from being overshadowed by technological capability or performance gains. That anchoring becomes especially important once AI begins participating in learners' regulatory cycles.
1.3 The gap: lack of frameworks for AI as a co-regulator
Despite widespread adoption of AI-supported learning systems, existing research has largely examined system capabilities, performance effects, or personalization strategies rather than how AI participates in the regulatory mechanisms that make learning self-directed (Banihashem et al., 2025, pp. 1, 13–17). While prior work documents when AI improves outcomes or increases efficiency, it offers limited analytic tools for distinguishing when AI support strengthens learners' monitoring, control, and sensemaking from when it displaces those processes by resolving difficulty prematurely or obscuring rationale. As a result, the field lacks an integrative framework for analyzing AI not only as instructional support, but as a persistent influence on the conditions under which self-regulated learning unfolds.
Existing accounts less clearly specify how AI interventions reshape the regulatory conditions under which learners notice difficulty, interpret feedback, and decide how to act. Guidance that arrives immediately or automatically can clarify goals and reduce uncertainty, yet it can also preempt the decision points that activate monitoring, strategy selection, and reflection. When system-generated explanations, prompts, or recommendations become embedded in the flow of activity, they reshape not only what learners do, but when and how regulatory judgments are made.
These tensions highlight the need for analytic tools capable of distinguishing when AI strengthens learners' interpretive and regulatory capacities and when it erodes them by compressing decision points, obscuring evaluative cues, or redirecting attention away from learners' own sensemaking processes. Existing accounts describe system capabilities and learning effects, but they do not yet offer a principled way to analyze AI as an influence on regulation itself.
Addressing this gap requires a framework that conceptualizes AI as a co-regulatory influence: a system that participates in shaping the cues, timing, and interpretive conditions under which SRL unfolds. Unlike traditional instructional scaffolding, which typically targets discrete tasks or phases of performance, this framing emphasizes AI's role in structuring the ongoing regulatory conditions under which monitoring, control, and standards-based evaluations unfold. Here, co-regulation as a relational process—understood as the interactional shaping of regulatory conditions between learner and system, rather than as an affective or socio-emotional dynamic—does not imply parity with human judgment, but acknowledges that AI systems increasingly shape the explanations, prompts, and decision points learners rely on by configuring conditions and evaluative cues. Such a framework must clarify how timing, transparency, interpretability, and alignment with learner-endorsed goals determine whether AI interventions support or distort regulatory development. Rather than treating AI primarily as a source of adaptation and performance-oriented support (e.g., Du Plooy et al., 2024; Strielkowski et al., 2024), the framework developed here focuses on how AI structures the conditions under which learners monitor, interpret, and adjust their own activity.
1.4 Contribution and manuscript roadmap
Building on this conceptualization of AI as a co-regulatory influence, this manuscript offers a conceptual framework for understanding how AI participates in regulatory processes without replacing them. Grounded in SRL theory and anchored in Winne and Hadwin's COPES architecture, the framework adopts a relational design orientation by treating AI not as a performance optimizer but as a system that shapes the regulatory conditions learners must interpret and navigate. In this sense, AI's value lies less in resolving difficulty or accelerating performance, and more in supporting the interpretive and decision-making work through which agency and expertise develop. The contribution is therefore process-level rather than technical: a unified model and design-oriented framework for evaluating AI not only by what it produces, but by how it structures learner agency, interpretive engagement, and the regulatory terrain over which learning unfolds.
Operationally, this contribution takes the form of a unified model of AI as a co-regulator within self-regulated learning (SRL), grounded in Winne and Hadwin's COPES architecture and centered on productive metacognitive friction as a mechanism for sustaining learner-driven regulation. From this model, the article derives a set of design principles specifying how AI can shape regulatory conditions while preserving learner agency, interpretive engagement, and developmental coherence over time.
Three guiding questions orient the analysis:
Under what conditions can AI participate in COPES-based regulatory cycles as a co-regulatory influence without displacing learner-driven regulation?
How might productive metacognitive friction be understood as operating within AI-mediated environments to sustain monitoring, control, and adaptive learning over time?
What conceptual design principles and observable indicators can be articulated to guide and evaluate AI-supported co-regulation while preserving learner agency and developmental trajectories?
The manuscript proceeds as follows:
Section 2 situates AI within SRL theory, followed by an introduction of productive metacognitive friction in Section 3. Sections 4 and 5 develop the framework's design principles and evaluation criteria. Section 6 examines implications across contexts, and Section 7 discusses the framework's contribution, boundaries, and emerging research questions. The manuscript concludes in Section 8 with a call for collaborative research and design.
Although grounded in educational SRL theory and instructional design research, the framework's relevance extends to workplace learning contexts in which learners face high demands for self-regulation and AI-mediated judgment. Across these contexts, the manuscript traces how AI-mediated environments can be structured to support the mechanisms through which learners become reflective, agentic, and self-directed participants in their learning.
2 Situating AI in SRL theory
A meaningful analysis of AI's role in learning must be grounded in the theory that explains how learners guide, monitor, and adjust their thinking: self-regulated learning (SRL). SRL has long provided a robust account of how individuals plan, generate and select strategies, interpret feedback, and revise their approach over time, with complementary emphases on temporal phases (Zimmerman, 2000), motivational regulation (Pintrich et al., 1991), and process-level operations (Winne and Hadwin, 1998, 2008). These processes are not supplemental to learning; they are the mechanisms through which understanding develops and expertise takes shape. Because AI now appears in many of the moments where monitoring and control unfold, its influence must be interpreted through this existing regulatory architecture. AI does not enter a void; it enters a cycle shaped by metacognition, motivation, strategic knowledge, and contextual interpretation. This relational framing matters because learners interpret AI signals through prior beliefs, experiences, and contexts. AI influences learning only insofar as learners make meaning of its prompts.
Beginning with SRL theory allows us to evaluate AI not by novelty, efficiency, or technical capability, but by its effects on the processes that make learning self-directed and transferable. Whether AI prompts reflection, clarifies uncertainty, introduces strategic contrasts, or inadvertently resolves challenges too early, these influences register within the SRL cycle by redistributing when and where metacognitive effort is required, sometimes amplifying reflection and sometimes collapsing it through premature fluency (Tankelevitch et al., 2024). For this reason, SRL functions here as the disciplinary spine: the lens through which AI's regulatory role is assessed, and the grounding for the relational design principles developed later. Before examining opportunities and tensions, we outline the regulatory architecture itself and why its integrity must remain central as AI becomes a participant in contemporary learning ecologies. Recent mapping reviews reinforce the need for this theoretical grounding, noting that much of the literature emphasizes performance, efficiency, or personalization outcomes while leaving underexamined how AI participation reshapes planning, monitoring, and control processes over time (Banihashem et al., 2025, pp. 13–17, 19–20).
2.1 The core SRL processes: monitoring, control, strategy, motivation
SRL models describe learning as a dynamic sequence of planning, monitoring, control, and reflection. While models differ in emphasis, they converge on the insight that effective learning depends on noticing discrepancies, selecting strategies purposefully, and sustaining engagement through uncertainty.
Monitoring anchors this cycle. Learners track progress, evaluate strategy effectiveness, and detect mismatches between expectations and outcomes. In the COPES architecture (Winne and Hadwin, 1998, 2008), monitoring is a continuous comparison between task conditions, one's actions, emerging products, and standards for quality. At its core, monitoring involves detection: making sense of cues that signal confusion, insight, or the need for adjustment.
Control refers to the adjustments learners make when those cues indicate a discrepancy. It is the response side of regulation: switching strategies, revising goals, seeking information, or reallocating effort. Control is where metacognitive judgments translate into action, and its quality depends on both strategic knowledge and motivational resources.
Strategic knowledge determines the quality and meaningfulness of these adjustments. This includes awareness of available strategies, understanding when and why each is useful, and the ability to choose among them based on task demands. As learners refine their strategic repertoire through experience and reflection, regulation becomes more adaptive.
Motivation supports all of these processes. Beliefs about competence, value, and interest shape how learners interpret difficulty and whether they persist or disengage. When motivation frames challenge as meaningful, learners treat discrepancies as diagnostic rather than threatening, enabling the sustained reflection that builds expertise.
These processes are intertwined: monitoring without control produces awareness without adjustment, control without strategic knowledge leads to superficial switching, and strategic knowledge without motivation remains inert. Understanding the architecture of SRL at this level of granularity is essential for evaluating how AI participates in learning. AI does not perform regulation, but it reshapes the conditions under which the core SRL processes occur. Although SRL theory formalizes these regulatory processes in different ways, the analysis that follows adopts Winne and Hadwin's COPES architecture as its primary operational frame to make these dynamics explicit (Tinajero et al., 2024).
Within this framework, self-regulated learning is modeled through recursive relations among conditions, operations, products, evaluations, and standards. COPES specifies how learning is regulated through these relations, providing a process-level account of how discrepancies are detected and acted upon. This architecture is used here to interpret how AI reconfigures the conditions, cues, and evaluative signals that learners rely on during monitoring and control. Pintrich et al. (1991) further elaborate how learners interpret standards, appraise value, and decide whether discrepancies warrant effort, persistence, or withdrawal, situating these processes within COPES' evaluative and standards-based components rather than as a parallel regulatory model. Complementary SRL models, including Zimmerman's phase-based account of forethought, performance, and reflection, are drawn on to illuminate temporal structure and agentic dynamics within this process, rather than as alternative operational accounts of regulation. This role distinction allows SRL theory to remain theoretically plural while preserving a single, explicit process model for analysis.
Because this manuscript draws on multiple self-regulated learning traditions for distinct analytic purposes, Table 1 clarifies the functional role and analytic limits of each framework as used here. Figure 1 then shows how AI enters this architecture by shaping task conditions and evaluative signals while remaining external to the learner's regulatory loop.
Table 1. Functional role and limits of SRL frameworks as used in this manuscript.
| Framework | Functional role in this manuscript | Analytic scope within this framework |
|---|---|---|
| Winne and Hadwin — COPES architecture (1998, 2008) | Provides the process-level regulatory architecture for this manuscript, specifying how conditions, operations, products, evaluations, and standards interact through monitoring and control. Used to analyze how AI reshapes cues, evaluative signals, and regulatory conditions without performing regulation itself. | Focuses on process-level regulatory architecture; motivational content, social context, and learner values are addressed through complementary frameworks, and optimal strategies or outcomes are not prescribed. |
| Zimmerman — Phase-based SRL model (2000) | Used to illuminate temporal structure and agentic dynamics across forethought, performance, and reflection, helping situate where regulatory decisions occur over time. | Provides temporal structure and agentic phases; operational mechanisms of regulation and AI interaction are specified through the COPES architecture. |
| Pintrich — Motivational regulation and control beliefs (1991) | Used to explain how learners appraise standards, value, and control, shaping whether discrepancies trigger effort, persistence, or withdrawal within COPES’ evaluative processes. | Clarifies motivational appraisal within evaluative processes; monitoring and control mechanisms are specified through COPES rather than as a standalone SRL cycle. |
| Self-Determination Theory (Ryan and Deci, 2000) | Used to clarify conditions under which agency, autonomy, and engagement are supported, particularly in relation to personalization and learner endorsement of goals. | Clarifies conditions supporting autonomy and engagement; regulatory processes, monitoring, and strategy selection are modeled through COPES. |
| Cognitive Load Theory (Sweller et al., 1998) | Used to distinguish which forms of effort should be reduced (extraneous load) and which must be preserved (germane load) when designing scaffold fading that supports regulatory development. | Distinguishes forms of cognitive effort relevant to scaffold design; self-regulatory monitoring, control processes, and motivational appraisal are addressed through complementary SRL frameworks. |
This table specifies the functional role and analytic scope of each SRL framework as used in this manuscript, clarifying how each contributes to the integrated model.
Figure 1. How AI enters the COPES architecture: the system shapes task conditions and evaluative signals while remaining external to the learner's regulatory loop.
2.2 How AI intersects with (but does not replace) SRL processes
As AI becomes integrated into learning environments, it interacts with regulatory processes that are already distributed across learners, tools, and contexts rather than contained solely within the individual (Järvelä et al., 2023). These intersections do not create new phases of SRL or substitute for the learner's role. Instead, AI alters the conditions under which SRL unfolds by shaping cues, adding representations, or changing how discrepancies appear.
For monitoring, AI can make mismatches more visible by highlighting inconsistencies in a solution draft, surfacing reasoning patterns, or prompting learners to articulate their understanding. When explanations clarify why a recommendation is made, learners gain a stronger basis for comparing expected and actual understanding, a pattern consistent with learning sciences work showing how contrasts and worked examples support regulatory decision-making (Koedinger et al., 2012).
For control, AI can enrich the decision space by offering contrasts among strategies or illustrating the consequences of particular choices. Such information supports sensemaking without executing decisions on the learner's behalf.
For strategic knowledge, AI can surface unfamiliar strategies through examples, counterexamples, or demonstrations of alternative approaches. These broaden the learner's repertoire while preserving the need to choose, evaluate, and apply strategies independently.
For motivation, AI can reduce interpretive uncertainty by clarifying task structures, pacing complexity, or connecting activities to learner goals. These influences support engagement by making learning more navigable.
Across these intersections, AI's influence remains conditional and interpretive: it shapes what learners attend to and how they interpret cues, but it does not perform SRL. This participatory, non-substitutional role provides the basis for assessing when AI extends regulatory processes and when it risks interfering with them.
2.3 Opportunities: when AI supports SRL processes
When deliberately aligned with SRL theory and scaffolded to preserve learner interpretation, AI can strengthen the visibility, accessibility, and interpretability of regulatory cues. Such effects have been observed under specific, well-scaffolded conditions rather than as a general property of AI support (Khalil et al., 2024). These opportunities do not reshape SRL's structure; they improve the precision and accessibility of the signals through which regulation is enacted.
Monitoring is strengthened when AI increases the salience of discrepancies and supports more calibrated comparison between current performance and standards. By making regulatory cues more explicit while leaving interpretation to the learner, AI can support more accurate metacognitive judgments rather than replacing them.
Control is supported when AI expands the clarity of available options and the anticipated consequences of different choices. In these cases, learners retain decisional authority while benefiting from a more transparent decision space, enabling adjustments that are deliberate rather than reactive.
Strategic knowledge deepens when AI contributes to a broader and more conditional understanding of when and why particular strategies are effective. Rather than prescribing actions, such supports can enhance transfer by strengthening learners' ability to select and adapt strategies across contexts.
Motivation benefits when AI reduces unnecessary ambiguity while preserving productive challenge. When task demands and standards are clearer, learners are better positioned to interpret difficulty as informative rather than discouraging, supporting sustained engagement.
Across these opportunities, the shared outcome is improved regulatory quality: cues are more interpretable, comparisons more calibrated, decisions more deliberate, and persistence more informed, all while preserving the learner's role in directing regulation.
2.4 Tensions: when AI risks undermining SRL processes
AI can also disrupt SRL when system behavior intersects poorly with learners' interpretive responsibilities. These tensions are not presented as symmetric counterparts to the opportunities described above, but as design-contingent risks that emerge when system dynamics overtake learner interpretation.
Over-structuring removes opportunities for learners to monitor progress, notice discrepancies, or make strategic decisions.
Over-scaffolding encourages reflexive acceptance of system suggestions, limiting experimentation and weakening strategic comparisons.
Over-automation displaces control processes when AI adjusts difficulty or preempts decisions without explanation, reducing opportunities to recognize or interpret discrepancies.
Poor timing disrupts regulation when suggestions interrupt focus or arrive before learners form an approach.
Opacity diverts cognitive resources toward understanding the system rather than the task when rationale is unclear or inconsistent.
Goal drift occurs when system cues implicitly redefine success by prioritizing efficiency or correctness in ways that conflict with learner intentions.
Across these tensions, a consistent pattern emerges that mirrors the earlier opportunities: when system behavior overtakes or obscures the learner's regulatory work, it alters the cues and conditions that SRL depends on.
These opportunities and tensions show that AI's influence unfolds within SRL's regulatory architecture by reshaping how learners encounter and work through discrepancies. This matters because SRL development depends on encountering, interpreting, and working through productive metacognitive friction: the discrepancies that make regulation necessary. When AI enhances those moments, it strengthens SRL; when it obscures them or resolves uncertainty prematurely, regulation becomes thinner and less learner-directed, redistributing metacognitive effort away from learners' own monitoring and control processes (Tankelevitch et al., 2024).
Understanding these dynamics requires examining how AI reshapes the cues and experiences that activate monitoring and control, especially in moments of uncertainty or mismatch. These moments have long been central to SRL theory, yet AI changes how they arise and how learners work through them. Section 3 turns to the construct that sits at the heart of this dynamic: productive metacognitive friction, the mechanism through which regulatory opportunity becomes regulatory development.
3 Productive metacognitive friction in SRL
SRL depends on moments when learners pause, evaluate progress, and adjust their approach. These moments of cognitive and motivational tension occur when learners detect a divergence between intended goals and emerging outcomes. We refer to these discrepancy signals as productive metacognitive friction: a regulatory dynamic increasingly reshaped within AI-mediated learning systems (Järvelä et al., 2023; Molenaar, 2022). Productive metacognitive friction is not introduced as a new SRL construct, but as an analytic lens for examining how existing regulatory discrepancies are preserved, interpreted, or prematurely resolved by AI support. Rather than constituting an obstacle, this friction activates the monitoring and control processes through which strategy use, persistence, and adaptive expertise develop.
3.1 Friction in SRL theory
In SRL theory, friction functions as a mechanism that moves learners from automatic execution to deliberate regulation. Models of self-regulation describe learning as a cycle in which individuals set goals, monitor progress, and revise strategies when expectations and outcomes diverge. This dynamic is articulated temporally in phase-based accounts (Zimmerman, 2000) and operationally through discrepancy detection mechanisms (Winne and Hadwin, 1998, 2008). These discrepancies create tension points of confusion, reevaluation, and reframing that define productive metacognitive friction. Productive metacognitive friction is not synonymous with task difficulty or cognitive load; it refers specifically to moments when discrepancies become interpretable signals that activate monitoring, control, and strategic choice within the self-regulated learning cycle.
Monitoring is the engine of this process. Learners continually compare task conditions, operations, and products against internal standards. In Winne and Hadwin's COPES architecture, these comparisons are trace-based and ongoing; mismatches slow execution and prompt explanation-seeking, helping learners clarify preferences, identify limitations, and explore alternative strategies. Within this architecture, this friction corresponds to moments when conditions, operations, or emerging products fail to meet internal standards, triggering evaluation and control rather than automatic task continuation. Through these micro-corrections, friction becomes a source of metacognitive insight rather than impediment.
Friction also carries a motivational dimension. Learners' beliefs about competence, value, and interests shape whether they interpret difficulty as a signal to persist or withdraw (Pintrich et al., 1991; Ryan and Deci, 2000). When difficulty is interpretable, friction strengthens agency and supports strategic flexibility. When it becomes opaque or overwhelming, friction can invert into unproductive struggle that diverts attention toward confusion rather than regulation.
SRL therefore frames friction as both essential and delicate: it must be present, visible, and interpretable for regulation to develop. These dynamics become especially consequential as AI systems begin shaping how, when, and whether discrepancies become interpretable.
3.2 How AI alters the conditions of friction
When AI enters the learning environment, it reshapes the timing and visibility of the discrepancies that activate regulation by mediating when and how monitoring cues become available (Järvelä et al., 2023). Because SRL depends on learners detecting discrepancies between goals, strategies, and outcomes, any system that anticipates or restructures these moments influences the learner's regulatory trajectory.
AI can enrich friction by making difficulty more legible (Khalil et al., 2024). Prompts that surface reasoning gaps, ask learners to articulate goals, or contrast alternative strategies activate monitoring. Explanations that clarify why a suggestion appears provide a clearer basis for evaluating thinking (Kumar et al., 2024).
AI can also diminish friction by prioritizing smoothness or efficiency at the interactional level (Tankelevitch et al., 2024). Automated hints, rapid solutions, or silent task adjustments can erase the discrepancies learners must detect to regulate their learning (Lim et al., 2023). Progress may feel fluent, but regulatory processes thin in the absence of legible tension.
Conversely, AI can introduce unproductive friction when system behavior is opaque or poorly timed (Lim et al., 2023). Unexplained shifts in difficulty, interruptions that break flow, or context-agnostic recommendations create regulatory confusion rather than insight. Friction persists but loses legibility, making it less actionable for SRL (Zhai et al., 2024).
Thus, the question is not whether friction exists in AI-mediated environments, but what form it takes and whether that friction strengthens or disrupts core regulatory processes.
3.3 Productive vs. unproductive friction in AI-mediated learning
Distinguishing productive from unproductive friction is essential once AI begins shaping regulatory conditions. Productive friction arises when difficulty is interpretable, well-timed, and aligned with learner-endorsed goals. It alerts learners to mismatches, prompts monitoring, and supports the micro-decisions that define SRL: persisting in effort, revising strategies, seeking clarification, or reconsidering an approach. Transparent prompts that illuminate a discrepancy or surface reasoning often create this form of friction.
Unproductive friction emerges when difficulty becomes opaque or no longer matches the learner's resources for working through it. Unexplained recommendations or adaptive moves can redirect attention from understanding the task to deciphering system behavior. Unproductive friction can also arise through the suppression of necessary challenge. When learners advance without encountering the discrepancies that activate regulation, an unproductive ease emerges that undermines development. Breakdowns arise not from task demands themselves, but from how difficulty is surfaced, timed, or resolved within the learner–AI interaction.
The distinction hinges on interactional conditions, not inherent difficulty (Molenaar, 2022). Productive friction depends on transparency, timing, and alignment, whereas unproductive friction emerges when these drift. This distinction clarifies why AI design must sustain interpretability and protect learner agency as systems increasingly influence the terrain of difficulty.
3.4 Friction as relational co-regulation
Once AI becomes embedded within a learning ecology, the interpretation of friction becomes relationally co-regulated through interactions among learners and AI systems (Järvelä et al., 2023). Friction no longer arises solely from internal monitoring or task structure, but is also shaped by the timing, transparency, and alignment of system interventions. These relational dynamics determine whether learners remain positioned for productive reflection and strategic revision or become overwhelmed, disengaged, or over-assisted.
Transparent explanations help learners interpret discrepancies, understand why they matter, and use them to guide strategy choices. Attuned prompts calibrated to pace, reasoning trajectory, and goals can provide context-sensitive support that preserves learner choice and sustains the learner's role as agent within the regulatory process.
Misalignment disrupts this calibration. Interventions that arrive too early, too forcefully, or without visible rationale shift attention from sensemaking to system-guessing. Here, friction becomes unproductive when the relational cues required for interpretation are absent, even if task difficulty remains unchanged.
Viewing friction as relationally co-regulated highlights why SRL cannot be understood as purely cognitive in AI-mediated contexts. Interactional integrity, defined by whether AI behavior is interpretable, goal-aligned, and choice-preserving, determines whether AI amplifies reflective engagement or displaces regulatory effort.
3.5 The tension: automation vs. autonomy
As AI participates more deeply in regulatory cycles, a central tension emerges: should difficulty be smoothed through automation or preserved to maintain the learner's role as an active regulator?
Automation can be helpful when it removes noise or clarifies task structure. But when it resolves discrepancies before learners notice them, for example by adjusting difficulty or proposing strategies without visible rationale, it displaces monitoring and control and creates an illusion of mastery that obscures regulatory skill (Molenaar, 2022; Zhai et al., 2024).
Autonomy requires learners to encounter interpretable friction. Without attunement, however, poorly timed or opaque system behavior can overwhelm and create confusion rather than insight. Excessive autonomy can burden learners as much as excessive automation can under-engage them.
The task is not choosing between automation and autonomy but calibrating the friction conditions that sustain learner agency. Interventions that are transparent, appropriately paced, and purpose-aligned preserve the conditions under which AI supports but does not overshadow regulation. Under these conditions, learners remain central decision-makers even as AI participates in shaping the regulatory terrain they navigate.
Taken together, these dynamics show that productive metacognitive friction is interactionally shaped and directly influences monitoring, strategy use, and reflective judgment. AI can strengthen, distort, or erase this friction depending on how it participates in the regulatory cycle. Productive friction emerges when learners encounter interpretable discrepancies and retain authority over how to respond. Unproductive friction arises when system behavior obscures, misaligns, or prematurely resolves those discrepancies. Because productive friction drives SRL development, AI systems must preserve its constructive role. In response, Section 4 outlines principles for designing AI as a co-regulator that sustains rather than supplants the conditions under which self-regulated learning develops.
4 AI-informed theoretical design principles for SRL
In this manuscript, AI is treated as a co-regulatory influence: a system that shapes the cues, timing, and interpretive conditions under which self-regulated learning unfolds, without performing monitoring, control, or evaluation on the learner's behalf. AI becomes consequential the moment it enters the learner's regulatory cycle. Because AI can strengthen or distort the conditions of productive friction, the challenge becomes designing interventions that support rather than override the regulatory work learners must do. As shown in Section 3, the opportunities and tensions in AI-supported SRL arise not from the technology alone, but from the shifting interactional dynamics between learner, system, and context. Preserving productive metacognitive friction, sustaining learner agency, and protecting the coherence of the learning arc therefore require principled design rather than incidental or efficiency-driven support.
Following the SRL-first logic established earlier, this section introduces five theoretical design principles that function as levers for shaping co-regulation. Each principle links a theoretical foundation to concrete design moves, provides a brief practice anchor, and identifies evaluation indicators that keep the design accountable to SRL processes. Together, they clarify how AI should interact in ways that remain responsive, interpretable, and aligned with learners' regulatory capacities so that its presence deepens the regulatory processes through which SRL develops rather than redirecting them.
4.1 Principle 1 — preserve learner agency
4.1.1 Theory link
Across SRL theory, agency is not an optional add-on; it is the generative engine that drives forethought, monitoring, and strategy selection. Zimmerman's social-cognitive account emphasizes that self-efficacy and perceived control shape every phase of the SRL cycle, from goal-setting to reflection (Zimmerman, 2000). Pintrich et al. (1991) similarly position control beliefs as central to motivational regulation, showing that how learners interpret their influence over outcomes shapes whether they initiate, sustain, or abandon strategies. Self-Determination Theory further clarifies when agency is experienced as volitional rather than compliant: autonomy, alongside competence and relatedness, functions as a prerequisite for meaningful engagement rather than as a mechanism of regulation itself (Ryan and Deci, 2000). Within this constellation, agency functions as the structural precondition under which self-regulation unfolds, shaping how learners interpret cues and direct their regulatory acts.
AI's growing presence in learning environments introduces new constraints on these mechanisms. When adaptive systems over-specify pathways, automate strategy selections, or obscure the rationale behind their guidance, they risk diminishing the decision points through which SRL develops (Babayev, 2025). Over time, this can weaken the processes that sustain regulatory growth. If a system consistently presents a single “next best step”, learners may begin deferring decisions they are fully capable of making, weakening the motivational, metacognitive, and behavioral components that support regulation. Preserving agency therefore serves as the primary design anchor for the remaining principles.
4.1.2 Design description
Preserving learner agency in AI-mediated environments requires calibrated choice: systems should place meaningful decisions in learners' hands at moments when those decisions build regulatory competence. This includes making scaffold intensity adjustable rather than fixed, enabling learners to modulate the amount, timing, and form of support. Instead of a single adaptive pathway, agency-preserving design offers clearly explained options such as light metacognitive prompts, targeted strategy suggestions, or deeper structural feedback. Each mode includes a concise explanation of its trade-offs so learners can make informed choices rather than defaulting to passive compliance.
Visibility of control is equally essential. Interfaces should make it clear that learners are steering their own process by surfacing options to defer prompts, adjust frequency, or pause assistance altogether. These actions should be framed as legitimate SRL acts rather than as “turning the AI off”: deciding what kind of support aligns with one's goals, workload, and confidence. The choice architecture matters here. Nudges are inevitable, but they should remain transparent, autonomy-supportive, and reversible. In SRL contexts, this means prompts structured to preserve learner discretion (“Would you like to consider an alternative approach?”), not directives that prescribe a course of action (“You should do X now”). Attuned timing and tone position AI as a co-regulator of conditions, not a silent decision-maker or an over-directive tutor.
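To make this choice architecture concrete, the sketch below shows one way learner-adjustable support modes might be represented in an implementation. It is a minimal illustration under stated assumptions, not a prescribed design: the mode names, the intensity scale, and the offer_support function are hypothetical, chosen to mirror the three options described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SupportMode:
    """One learner-selectable scaffold mode with an explicit trade-off note."""
    name: str
    description: str  # what choosing this mode entails, stated to the learner
    intensity: int    # 1 = light, 3 = deep; adjustable rather than fixed

# Hypothetical catalog mirroring the three modes described above.
MODES = [
    SupportMode("metacognitive_prompts",
                "Light reflective questions; preserves maximal learner discretion.", 1),
    SupportMode("strategy_cues",
                "Targeted suggestions such as outlining or cohesion checks.", 2),
    SupportMode("structural_feedback",
                "Deeper guidance on argument flow; highest support, least discretion.", 3),
]

def offer_support(current: Optional[SupportMode] = None) -> str:
    """Frame support as a reversible, autonomy-supportive choice, never a directive."""
    header = (f"You are currently using {current.name if current else 'no scaffold'}. "
              "Would you like to adjust, defer, or pause support? "
              "Declining assistance is a legitimate regulatory choice.")
    options = "\n".join(f"- {m.name} (intensity {m.intensity}): {m.description}"
                        for m in MODES)
    return header + "\n" + options
```

The design commitment the sketch encodes is that every offer enumerates its trade-offs and treats deferring or pausing support as a first-class option rather than a failure state.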
4.1.3 Practice anchor
Consider a writing coach designed to support students drafting a research paper. A conventional adaptive system may intervene through unsolicited structural feedback or rewriting suggestions. A relationally designed co-regulator foregrounds agency at the beginning of the interaction. When learners begin drafting, the system offers three transparent modes of support:
Light metacognitive prompts (“What part of your argument do you want to clarify next?”)
Focused strategy cues such as outlining or cohesion checks
Deeper structural guidance on argument flow or evidence integration
Each mode includes a brief explanation of what choosing it entails. A learner might start with light prompts to build momentum, shift into structural guidance during a revision phase, and scale support back again when approaching synthesis. At any point, the learner can pause, adjust intensity, or decline assistance. The system's role is to sustain momentum while protecting judgment, not to optimize the draft on the learner's behalf.
4.1.4 Evaluation note
Translating this design into practice requires methods capable of detecting where agency is strengthened or eroded. Evaluation therefore depends on examining subjective experience, behavioral patterns, and developmental markers. Perceived autonomy and self-efficacy scales capture learners' felt sense of control (Bandura, 1997; Ryan and Deci, 2000). Micro-analytic SRL indicators, such as self-initiated strategy shifts, goal-setting edits, or voluntary reductions in scaffold intensity, provide evidence of active regulation rather than passive drift.
Trace data reveal deeper patterns: whether learners defer prompts, opt into particular scaffold modes, or increasingly take initiative without external cues. A relational indicator such as initiative drift, in which learners gradually rely on the system for decisions they previously made independently, can signal erosion of agency. Conversely, decreasing reliance on high-intensity support, paired with stable or improving performance, may indicate healthy SRL development rather than dependency on the system. Together, these indicators preserve agency as a measurable dimension of relationally aligned AI design.
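As one illustration of how initiative drift might be traced, the sketch below operationalizes it, purely for demonstration, as a session-over-session decline in the share of learner-initiated regulatory decisions. The trace schema of (session_id, initiator) pairs is an assumption, not a standard logging format.

```python
from collections import defaultdict

def initiative_shares(events):
    """Per-session share of learner-initiated regulatory decisions.

    `events` is an iterable of (session_id, initiator) pairs, where initiator
    is "learner" or "system"; this trace schema is assumed for illustration.
    """
    totals, learner = defaultdict(int), defaultdict(int)
    for session, initiator in events:
        totals[session] += 1
        learner[session] += initiator == "learner"
    return [learner[s] / totals[s] for s in sorted(totals)]

def shows_initiative_drift(events, min_sessions=3):
    """Flag a session-over-session decline in learner-initiated decisions,
    a candidate signal that agency is eroding rather than developing."""
    shares = initiative_shares(events)
    return (len(shares) >= min_sessions
            and all(later <= earlier for earlier, later in zip(shares, shares[1:])))

# Example: learner initiative falls from 80% to 40% across three sessions.
trace = [(1, "learner")] * 4 + [(1, "system")] + \
        [(2, "learner")] * 3 + [(2, "system")] * 2 + \
        [(3, "learner")] * 2 + [(3, "system")] * 3
assert shows_initiative_drift(trace)
```

A real evaluation would pair such a trend with performance data, since declining initiative alongside stable performance can also reflect healthy delegation rather than erosion.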
4.2 Principle 2 — design for transparency and explainability
4.2.1 Theory link
Across SRL frameworks, effective monitoring and control depend on accurate mental models: learners must understand what they are doing, why they are doing it, and how strategies connect to outcomes. Winne and Hadwin's COPES architecture makes this explicit: regulation depends on learners comparing Operations and Products to internal Standards (Winne and Hadwin, 1998, 2008). When AI systems are introduced into this loop, they become part of the Conditions and Operations that shape these comparisons. If adaptivity is opaque or feedback is presented without rationale, learners cannot interpret why a prompt appears or how it should inform their next move. Explainability, defined by Doshi-Velez and Kim (2017) as enabling users to form a correct mental model of system behavior, is therefore fundamental to SRL integrity. Miller (2019) further highlights that explanations function as social narratives: learners infer meaning from stated reasons, apparent intentions, and anticipated consequences rather than from raw mechanics.
In SRL contexts, learners must use explanations to evaluate, refine, or reject system suggestions as part of metacognitive regulation. When they cannot infer how the system processed their input or why it generated a recommendation, they lose the ability to question alternatives or regulate their decisions. Transparency is therefore not a technical accessory but a relational and pedagogical requirement.
4.2.2 Design description
Transparency in relationally aligned AI involves surfacing forms of explainability that directly support SRL: what the system detected, how it processed those signals, and why it generated a particular suggestion. Explanations must remain concise and narrative, clear enough to guide monitoring and control without creating cognitive overload.
Adaptation logs offer one effective pattern: brief, learner-facing records that note decision points (“Prompt frequency increased after several declines”, “This recommendation builds on your earlier goal-setting inputs”). These logs reveal reasoning patterns in a way that helps learners evaluate, challenge, or override system suggestions based on their own interpretations and judgments.
Transparency also includes previewing the likely consequences of different support options. When learners can see how accepting a scaffold or delaying a recommendation might shape their trajectory, they maintain authority over regulatory decisions rather than relinquishing control to hidden automation. In relational terms, transparency provides sufficient insight to sustain trust and accurate monitoring while keeping the learner, rather than the system, at the center of decision-making.
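The sketch below illustrates what a learner-facing adaptation log entry might contain; the field names are hypothetical, but each maps to a transparency requirement named above: what changed, what was detected, why the move was made, and what consequence to expect.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AdaptationLogEntry:
    """A concise, learner-facing record of one adaptive move and its rationale."""
    timestamp: datetime
    what_changed: str     # e.g., "Prompt frequency increased"
    detected_signal: str  # what the system observed in the learner's activity
    rationale: str        # why the move was made, in plain language
    expected_effect: str  # previewed consequence, so the learner can contest it

entry = AdaptationLogEntry(
    timestamp=datetime.now(),
    what_changed="Prompt frequency increased",
    detected_signal="Several consecutive prompt declines",
    rationale="This builds on your earlier goal-setting inputs",
    expected_effect="More frequent check-ins while drafting; you can lower this anytime",
)
```

Keeping entries this compact reflects the constraint noted above: explanations must guide monitoring and control without themselves becoming a source of cognitive overload.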
4.2.3 Practice anchor
A workplace learning platform offers a clear illustration. Instead of presenting a list of skill-development recommendations without explanation, a relationally aligned system attaches brief rationales (“This builds on the targets you set for this quarter”, “This connects to last week's stakeholder analysis work”) and provides a compact adaptation log showing how prior choices shape current guidance.
In practice, this enables learners to judge alignment (monitoring), assess timing (strategic control), and determine whether a suggestion fits their workflow (metacognitive evaluation). When reasoning is visible and contestable, recommendations function as constructive options within a learner-directed regulatory space rather than as opaque directives.
4.2.4 Evaluation note
Evaluating transparency requires assessing whether learners use explanations to guide monitoring and control. Indicators include explanation usefulness (clarity, relevance, interpretability) and trust calibration, defined as learners' ability to modulate trust rather than uniformly accept or reject guidance.
Trace data can reveal whether learners consult logs or rationale pop-ups and whether explanation use precedes strategic decisions such as revising goals, refining strategies, or declining misaligned suggestions. This provides observable evidence of how explanations inform monitoring and control in real time.
A key relational indicator is learners' ability to articulate how explanations shape their choices, as in-the-moment articulation has been shown to surface monitoring- and control-relevant processes within the SRL cycle rather than post hoc reflection (Borchers et al., 2024). Error-correction behaviors, such as adjusting trajectories when explanations reveal mismatches, further signal effective transparency. Together, these measures position transparency as an empirically assessable dimension of SRL support rather than mere documentation of system behavior.
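One way such trace analysis might be implemented is sketched below: it computes the share of strategic decisions preceded by an explanation view within a short event window. The event labels, the set of strategic actions, and the window size are illustrative assumptions rather than a validated coding scheme.

```python
def explanation_informed_rate(trace, horizon=5):
    """Share of strategic decisions preceded by an explanation view within
    `horizon` events: observable evidence that rationale informs control.

    `trace` is an ordered list of event labels; the labels below are
    hypothetical stand-ins for whatever a real platform logs.
    """
    strategic = {"revise_goal", "switch_strategy", "decline_suggestion"}
    informed = total = 0
    for i, event in enumerate(trace):
        if event in strategic:
            total += 1
            if "view_explanation" in trace[max(0, i - horizon):i]:
                informed += 1
    return informed / total if total else None

# Example: one of two strategic decisions follows an explanation view.
log = ["view_explanation", "revise_goal", "edit", "edit", "edit", "edit",
       "edit", "edit", "switch_strategy"]
assert explanation_informed_rate(log) == 0.5
```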
4.3 Principle 3 — enable scaffold fading with responsiveness
4.3.1 Theory link
Fading is a well-established mechanism in learning science: scaffolds are temporary structures that recede as competence grows. Classic work by Wood et al. (1976) frames scaffolding as a dynamic interplay in which support increases when learners encounter difficulty and decreases as fluency emerges. Cognitive Load Theory (CLT) reinforces this distinction by specifying which forms of effort should be reduced (extraneous load) and which must be preserved to build understanding (germane load), as formalized in foundational CLT work (Sweller et al., 1998). Here, CLT is used to clarify effort boundaries, particularly when support suppresses necessary cognitive work, not to prescribe instructional sequencing or adaptive control. When scaffolds persist too long or remain overly detailed, they can suppress productive effort and distort perceptions of mastery.
The expertise-reversal effect adds nuance: supports that are effective for novices can become counterproductive as learners advance (Kalyuga, 2007). Failure to adjust scaffold intensity risks keeping learners in novice patterns. In SRL terms, fading strengthens performance-phase monitoring and control by supporting the shift from externally guided execution to self-directed strategy use. A relational design lens frames fading as calibrated co-regulation: timely, attuned reductions in support that signal growing capacity while preserving the learner's central role (Lim et al., 2023).
4.3.2 Design description
Effective scaffold fading in AI-mediated environments must be intentional, responsive, and legible. A relationally aligned co-regulator modulates support based on indicators such as demonstrated mastery, repeated strategic success, decreased need for reminders, or learner-initiated adjustments. This reflects fading with responsiveness by aligning support with evolving learner needs.
This process is operationalized through adaptive fade curves: structured patterns in which scaffold intensity decreases as competence becomes observable but can temporarily increase when learners encounter new demands. These curves are dynamic rather than linear, adjusting to fluctuations in understanding, task novelty, and learner-initiated preferences.
Legibility is essential. Learners should understand why scaffolds are receding (“You have applied this strategy independently several times”) and retain options to increase support when needed (“Would you like a hint or strategy cue?”). Transparency helps ensure that fading is interpreted as recognition instead of abandonment or unexplained system behavior.
Relationally framed fading communicates that learner strategies are increasingly capable of leading, while support remains available for recalibration. This reinforces the view that learners remain active meaning-makers, not passive recipients of optimization.
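The sketch below shows one possible form of an adaptive fade curve under the assumptions stated in its comments; the decay rate, novelty boost, and mastery signal are illustrative parameters, not validated values.

```python
from typing import Optional

def scaffold_intensity(mastery_evidence: int, task_novelty: float,
                       learner_override: Optional[float] = None,
                       base: float = 1.0) -> float:
    """Adaptive fade curve: support recedes as mastery evidence accumulates,
    rises temporarily with task novelty, and always yields to learner choice.

    mastery_evidence: count of independent, successful strategy applications.
    task_novelty: 0.0 (familiar demands) to 1.0 (substantially new demands).
    learner_override: if set, the learner's requested intensity wins outright.
    """
    if learner_override is not None:
        return learner_override                  # agency first (Principle 1)
    faded = base / (1 + 0.5 * mastery_evidence)  # nonlinear decay, not a fixed schedule
    boost = 0.6 * base * task_novelty            # reversible increase for novel demands
    return min(base, faded + boost)

print(scaffold_intensity(6, 0.0))  # 0.25: support has largely faded
print(scaffold_intensity(6, 1.0))  # ~0.85: novelty temporarily raises it again
```

Pairing any such curve with the legibility moves described above (explaining why support recedes and offering a path back to it) is what distinguishes responsive fading from unexplained withdrawal.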
4.3.3 Practice anchor
A coding-support AI offers a concrete illustration. Early in a learner's journey, the system can provide annotated examples, step-by-step debugging guidance, and targeted syntax cues. As fluency increases, support shifts toward brief strategic hints or reflective prompts such as “What pattern might reduce repetition here?”
Fading remains reversible. When learners encounter particularly complex or conceptually novel material, the system can reintroduce higher-intensity scaffolds, but only in autonomy-supportive forms (“Would you prefer a conceptual explanation, a guided example, or targeted hints?”). This creates a pattern of support that adjusts to developmental fluctuations while maintaining the learner's authority over the trajectory.
4.3.4 Evaluation note
Because fading is developmental, evaluation focuses on whether learners grow more independent, not solely on whether the system provides fewer prompts. Trace indicators such as declining hint requests, reduced use of high-intensity scaffolds, and increasing learner-initiated strategy shifts are signs of healthy development. A productive fading pattern shows reduced reliance on deep support alongside stable or improved performance (Siadaty et al., 2016).
Micro-analytic SRL indicators, such as independent planning, spontaneous monitoring, or voluntary transitions to lighter scaffold modes, provide additional evidence (Cleary and Callan, 2017). Temporary increases in scaffold use followed by rapid returns to independence, sometimes described as relapse–recovery patterns, suggest attuned fading. Persistent dependence or avoidance of autonomous attempts may signal over-scaffolding. These indicators preserve fading's accountability to its purpose: supporting durable, transferable regulatory competence.
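As a simple illustration, relapse-recovery patterns could be detected in scaffold-intensity traces along the following lines; the window length and the per-session intensity encoding are assumptions for demonstration only.

```python
def relapse_recovery_episodes(intensity_series, window=3):
    """Detect attuned fading: temporary rises in scaffold intensity that
    return to (or below) the pre-rise level within `window` steps.
    Episodes that never recover suggest over-scaffolding or dependence.

    `intensity_series` is an ordered list of scaffold-intensity values,
    one per task or session (an assumed encoding, for illustration).
    """
    episodes = []
    for i in range(1, len(intensity_series)):
        if intensity_series[i] > intensity_series[i - 1]:  # relapse begins
            baseline = intensity_series[i - 1]
            recovered = any(v <= baseline
                            for v in intensity_series[i + 1:i + 1 + window])
            episodes.append({"at": i, "recovered": recovered})
    return episodes

# A rise at step 3 that recovers by step 5 indicates attuned, reversible fading.
print(relapse_recovery_episodes([3, 2, 1, 2, 2, 1]))
# [{'at': 3, 'recovered': True}]
```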
4.4 Principle 4 — adapt responsively, with integrity
4.4.1 Theory link
Within SRL theory, adaptation is a standards-based regulatory act: learners compare ongoing performance to internal benchmarks and adjust strategies accordingly (Winne and Hadwin, 1998, 2008). The COPES architecture (Conditions, Operations, Products, Evaluations, Standards) makes this explicit by showing that regulatory decisions hinge on the standards that define what “good” performance means in context. Once AI is embedded within this architecture, it begins shaping the Conditions through which learners interpret their actions and outcomes. System adaptivity can therefore strengthen alignment with learning goals, but it can also shift that alignment (Lim et al., 2023).
This is where integrity matters. Adaptation driven by performance, novelty, or engagement optimization can bypass essential practice or reduce productive difficulty prematurely. Such shortcuts can distort learning trajectories by steering learners toward system-preferred pathways and collapsing decision points that learners need in order to regulate their own activity. Scholars in ethical AI (Holmes et al., 2021) and educational governance (Williamson and Eynon, 2020) caution that algorithmic systems can reconfigure control dynamics in educational environments. In SRL contexts, that risk appears when key regulatory decisions begin to move away from learners. Integrity therefore becomes a relational requirement: each adaptive move should remain aligned with learning goals, ethical boundaries, and the regulatory processes on which SRL depends.
4.4.2 Design description
Adapting responsively with integrity requires AI systems to adjust to learner needs while preserving the larger arc of competence development. Responsiveness alone is insufficient; adaptation must remain aligned with purpose.
Integrity-aligned adaptation rests on three commitments:
Alignment with learning goals
Every adaptive move should map directly to a learning objective, not merely to predicted success or short-term efficiency. If a learner struggles with planning, the AI can prompt reflective goal-setting, but it should not reorganize tasks in ways that bypass planning altogether.
Preservation of productive difficulty
The system should support learners in working through difficulty rather than smoothing it away. Productive metacognitive friction must remain intact so monitoring, evaluation, and control processes can occur.
Transparency of adaptive logic
When the AI adjusts task difficulty, prompt timing, or recommendation order, it should explain that move in concise, human-readable terms (“I reordered these steps to support your planning goal”). Making adaptations visible and contestable sustains interpretability and preserves learner authority within the regulatory process.
Seen this way, adaptation functions as attuned, purposeful co-regulation rather than silent correction.
4.4.3 Practice anchor
Consider an AI-supported project-management tool used in workplace upskilling. When deadlines shift, a conventional adaptive system might reorganize tasks to maintain efficiency, bypassing opportunities for strategic planning or communication. An integrity-aligned system would respond differently. When new constraints are detected, it surfaces clearly differentiated choices grounded in the learner's developmental goals:
Option A preserves the original plan and supports planning with targeted prompts.
Option B simplifies the workflow, reducing cognitive load but limiting opportunities to practice stakeholder negotiation.
Option C adds a reflective checkpoint, strengthening metacognitive monitoring.
By showing how each option reshapes the regulatory pathway toward planning, simplification, or reflection, the system supports an informed decision. If the learner selects a more demanding option, the AI can offer attuned scaffolds. If the learner selects a lighter path because of workload constraints, the system treats that choice as legitimate rather than deficient. Responsiveness here preserves both contextual fit and developmental integrity.
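The following minimal sketch illustrates one way such differentiated choices might be represented so that the rationale and goal linkage of each adaptive move remain visible to the learner; the type and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AdaptiveOption:
    """One learner-facing choice surfaced when task conditions change (illustrative)."""
    label: str
    regulatory_pathway: str  # which SRL process the option foregrounds
    rationale: str           # concise, human-readable adaptive logic
    linked_goal: str         # the learning objective the move maps to

def present_options(options: list[AdaptiveOption]) -> None:
    """Surface each option with its rationale so the decision stays with the learner."""
    for opt in options:
        print(f"{opt.label}: {opt.regulatory_pathway}")
        print(f"  Why: {opt.rationale} (goal: {opt.linked_goal})")

deadline_shift_options = [
    AdaptiveOption("Option A", "planning",
                   "Keeps the original plan and adds targeted planning prompts",
                   "strategic planning"),
    AdaptiveOption("Option B", "simplification",
                   "Reduces cognitive load, but limits stakeholder-negotiation practice",
                   "workload management"),
    AdaptiveOption("Option C", "reflection",
                   "Adds a reflective checkpoint to strengthen monitoring",
                   "metacognitive monitoring"),
]
present_options(deadline_shift_options)
```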
4.4.4 Evaluation note
Evaluating integrity-aligned adaptation requires examining whether learners maintain a coherent SRL trajectory, not merely whether performance improves.
The evaluation can mirror the three design commitments:
Goal alignment indicators
Assess whether adaptive interventions consistently map to intended learning objectives. Skill-path audits can verify that learners progress through the planned competence sequence rather than shifting toward system-driven shortcuts.
Productive difficulty indicators
Trace data should be used to examine whether challenge levels remain appropriate over time. Difficulty should not be reduced prematurely or without evidence that learners are ready for lighter support. Evaluation should also examine whether learners continue to engage in planning, monitoring, and revision rather than outsourcing these processes to the system (Weijers et al., 2023).
Transparency and authority indicators
Track whether learners inspect rationale pop-ups, review adaptation logs, or select among adaptive options (Mills and Sætra, 2022). Felt-agency measures can then assess whether learners experience adaptations as supportive and aligned with their goals while maintaining appropriately calibrated trust in system explanations and outputs (Conijn et al., 2023).
Together, these indicators help ensure that adaptation strengthens rather than supplants the regulatory processes central to SRL, making integrity a measurable dimension of principled AI design.
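As one illustration of the skill-path audits noted above, the sketch below compares a planned competence sequence against observed practice and flags skipped skills and system-driven detours. The heuristic and the skill labels are illustrative assumptions, not a validated audit procedure.

```python
def skill_path_audit(planned: list[str], observed: list[str]) -> dict:
    """Compare the planned competence sequence against skills actually practiced.

    Returns which planned skills were skipped and which observed steps were
    unplanned detours, flagging drift toward system-driven shortcuts.
    """
    skipped = [s for s in planned if s not in observed]
    detours = [s for s in observed if s not in planned]
    coverage = 1.0 - len(skipped) / len(planned) if planned else 1.0
    return {"skipped": skipped, "detours": detours, "coverage": coverage}

planned_sequence = ["goal setting", "task decomposition", "progress monitoring", "revision"]
observed_sequence = ["goal setting", "progress monitoring", "auto-reordered tasks", "revision"]
print(skill_path_audit(planned_sequence, observed_sequence))
# {'skipped': ['task decomposition'], 'detours': ['auto-reordered tasks'], 'coverage': 0.75}
```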
4.5 Principle 5 — balance personalization with purpose
4.5.1 Theory link
Personalization features prominently in AI-in-education discourse, yet its value for SRL depends on how it shapes monitoring, control, and motivation. Self-Determination Theory clarifies the motivational conditions under which personalization supports self-regulation without redefining its underlying mechanisms: engagement deepens when interests and values are authentically integrated rather than driven solely by novelty or preference (Ryan and Deci, 2000). SRL models likewise emphasize that goals and strategic decisions matter only when internally endorsed; even “personalized” cues can undermine autonomy if they shift learners away from goals they consider meaningful.
Research on intelligent tutoring systems (Koedinger et al., 2012) shows that adaptivity enhances learning when personalization aligns with coherent instructional purpose rather than unexamined system inferences. Mastery-learning approaches in the learning sciences reinforce this: variation in pathway is appropriate, but progression still follows a structured skills trajectory. A relational design lens integrates these insights by framing personalization as purposeful alignment that links learner interests, learning pathways, and system cues. When personalization prioritizes entertainment, convenience, or optimization without instructional grounding, it can shift authorship away from the learner and toward the system.
4.5.2 Design description
Balancing personalization with purpose requires distinguishing relevance from direction. Relevance allows AI to introduce topics, contexts, or examples that resonate with learners; direction refers to the underlying sequence of competencies they are building. Personalization supports SRL when it enriches relevance without altering direction.
A purpose-aligned system supports this by making personalization transparent, bounded, and co-authored. When the AI tailors material to learner interests (“Want sustainability-themed practice examples?”), it simultaneously anchors that choice in a visible skills roadmap (“These still target causal reasoning and argument structure”). This supports engagement while preserving the developmental trajectory.
Crucially, the system must make authorship explicit: which parts of the pathway reflect learner-endorsed goals, which reflect instructor-identified competencies, and which reflect default system logic. Shared authorship protects regulatory authority, enabling learners to adjust emphasis or introduce sub-goals while remaining aligned with long-term purpose. The AI's role is to help learners connect interest with disciplined practice in ways that strengthen identity, volition, and strategic coherence.
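A minimal sketch of such shared authorship might tag each roadmap step with who set it, keeping the split between relevance (theme) and direction (skill) inspectable; the structure and labels below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PathwayStep:
    """A step in the skills roadmap, tagged by who authored it (illustrative)."""
    skill: str        # the competence being targeted (direction)
    theme: str        # the personalized surface content (relevance)
    authored_by: str  # "learner", "instructor", or "system-default"

def describe_pathway(steps: list[PathwayStep]) -> None:
    """Make authorship and the relevance/direction split visible to the learner."""
    for s in steps:
        print(f"{s.skill:<22} via '{s.theme}' [set by: {s.authored_by}]")

pathway = [
    PathwayStep("causal reasoning", "sustainability examples", "learner"),
    PathwayStep("argument structure", "sustainability examples", "learner"),
    PathwayStep("counterargument use", "default practice set", "instructor"),
]
describe_pathway(pathway)
```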
4.5.3 Practice anchor
A language-learning AI illustrates this balance. In such a system, learners choose thematic domains (“Would you prefer environmental science texts or social innovation case studies?”) while the system maps each option to the skills roadmap (vocabulary breadth, syntactic complexity, discourse structure, and genre reasoning).
As learners progress, the system introduces reflective prompts (“Which themes help you persist during complex grammar practice?”). If a chosen theme limits exposure to essential structures, the AI proposes aligned alternatives that maintain interest while restoring purpose. Engagement remains meaningful because it directly supports regulatory development within the skills trajectory.
4.5.4 Evaluation note
Evaluating personalization with purpose requires examining both engagement quality and trajectory coherence.
Trace indicators can reveal whether personalized choices advance the skills roadmap rather than drift toward novelty (Weijers et al., 2023). Micro-analytic SRL measures can assess whether learners understand how personalization shapes their monitoring, strategy use, and performance. Self-report metrics can capture perceived alignment (“I see how my chosen themes support my goals”) and felt agency in pathway decisions.
A key relational indicator is whether learners can justify personalization choices in terms of long-term purpose. When learners sustain interest, practice core competencies, and demonstrate purposeful autonomy, personalization functions as intended. These indicators ensure that personalization remains aligned with SRL, supporting relevance while preserving the coherence of the learning arc.
Viewed across these five principles, a coherent model of AI as a co-regulator emerges, which this manuscript advances as a unified account within self-regulated learning. In this model, AI neither replaces nor optimizes learner regulation; it participates within COPES-based cycles by shaping the conditions under which learners monitor, interpret, and adjust their activity. The five design principles articulated in Section 4 specify how AI can be structured to sustain productive metacognitive friction across regulatory phases while remaining anchored in the psychological dimensions that underlie effective monitoring and control. Co-regulation is thus conceptualized not as a discrete intervention, but as a process-aligned influence operating within learner-authored regulation rather than outside it.
To clarify the integration of COPES, productive metacognitive friction, and AI-mediated design principles, Figure 2 visualizes the COPES-based AI Co-Regulation Model as a structured representation of co-regulation within self-regulated learning.
Figure 2
The model makes visible how AI support operates through the structuring of conditions and evaluative signals, while the regulatory work remains with the learner.
Taken together, the principles reveal a consistent design logic: AI's role in SRL is to support the co-regulation of learning processes with clarity and autonomy. The principles function as theory-grounded mechanisms that shape how learners encounter challenge, choose strategies, interpret feedback, and sustain purposeful progress. Together, they preserve the conditions under which SRL develops: learner agency, transparency, attuned support, integrity-aligned adaptation, and purposeful personalization.
Each principle is structured to permit empirical scrutiny. Their observable indicators offer concrete ways to assess whether AI is strengthening or displacing the self-regulatory mechanisms central to SRL. Section 5 translates this design logic into a measurement framework for empirical research, implementation, and responsible adoption across educational and workplace learning contexts.
5 Linking design to evaluation
Design alone is insufficient. For AI to function as a co-regulator of self-regulated learning, its contributions must be evaluated with the same theoretical care that guides its design. The principles in Section 4 articulate how AI can preserve agency, sustain transparency, support scaffold fading, protect process integrity, and personalize with purpose. Section 5 shifts the focus from design to evaluation by examining how these commitments can be empirically assessed in practice. Because AI participates in shaping the conditions of SRL by introducing new cues, forms of feedback, and moments of productive metacognitive friction, evaluation must account for these relational dynamics rather than treating the system as a neutral layer. This section links design to measurement by mapping each principle to evaluable constructs and identifying the categories researchers and practitioners can use to assess whether AI strengthens or displaces the regulatory processes central to SRL.
5.1 Evaluating co-regulation: a process-based review
Evaluating AI-supported SRL requires aligning measurement with regulatory processes. Rather than relying solely on outcome-oriented metrics, evaluation must capture evidence of how learners regulate in real time. SRL models describe learning as a cyclical, process-driven activity in which forethought, monitoring, control, and reflection unfold moment by moment; evaluation must therefore center on these processes rather than performance alone. When AI enters the regulatory cycle, it alters what counts as a cue, what counts as feedback, and how learners interpret difficulty. Learners respond to tasks and goals as well as to system explanations, scaffold adjustments, adaptive recommendations, and prompt timing. These emphases align with validated SRL measurement tools, including the Motivated Strategies for Learning Questionnaire (Pintrich et al., 1991), which assesses autonomy, self-efficacy, monitoring, and strategic regulation, all of which are central to evaluating how co-regulation unfolds.
This expanded interaction context introduces new patterns that must be evaluated: how often learners request or decline support, whether they override recommendations, how they interpret explanations, and whether they continue monitoring independently of system cues. These traces help reveal how co-regulation unfolds through behavioral and interactional choices. Micro-analytic approaches that track strategy shifts, goal revisions, help-seeking, and monitoring behaviors are critical because they capture regulatory processes as they occur during task engagement, revealing whether AI is strengthening or substituting the regulatory acts that SRL identifies as central (Cleary and Callan, 2017).
Trace data add granularity, including the timing, frequency, and sequence of learner–AI exchanges, and can be translated into indicators of engagement in SRL processes when interpreted through theory-aligned, validity-constrained measurement rather than treated as direct proxies for self-regulation (Fan et al., 2022; Siadaty et al., 2016). Short, decisive actions can reflect regulatory confidence, while slower or externally driven patterns may reflect weakening regulatory autonomy; such interpretations require contextual and longitudinal analysis rather than standalone inference. The goal of evaluation is to determine whether the learner remains the principal regulator, with AI providing attuned, autonomy-supportive assistance rather than substituting for regulatory decision-making.
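As a schematic illustration, the sketch below derives simple frequency- and sequence-based indicators from a hypothetical learner–AI event stream. The event names and the autonomy ratio are illustrative constructions; consistent with the caveats above, such signals would require theory-aligned validation before interpretive use.

```python
from collections import Counter

# Hypothetical learner-AI event stream; event names are illustrative, not a standard schema.
events = [
    ("00:01", "ai_prompt_shown"), ("00:02", "prompt_declined"),
    ("00:05", "strategy_shift"),  ("00:09", "hint_requested"),
    ("00:11", "rationale_viewed"), ("00:14", "recommendation_overridden"),
    ("00:20", "self_check"),
]

counts = Counter(name for _, name in events)

# Frequency-based indicators (to be read alongside context, never in isolation).
learner_initiated = (counts["strategy_shift"] + counts["self_check"]
                     + counts["recommendation_overridden"])
autonomy_ratio = learner_initiated / (learner_initiated + counts["hint_requested"])

# Sequence-based indicator: did monitoring continue after support was declined?
names = [name for _, name in events]
monitored_after_decline = ("prompt_declined" in names
                           and names.index("self_check") > names.index("prompt_declined"))

print(f"learner-initiated acts: {learner_initiated}, autonomy ratio: {autonomy_ratio:.2f}")
print(f"independent monitoring after declining support: {monitored_after_decline}")
```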
5.2 Mapping the five principles to evaluation criteria
Each design principle from Section 4 corresponds to a distinct evaluative logic:
Preserve learner agency requires evidence of voluntary control. Indicators include declining or modifying unsolicited suggestions, adjusting scaffold intensity, making self-initiated strategy shifts, and reporting high perceived autonomy. These signals indicate whether learners remain active decision-makers or increasingly rely on system direction.
Design for transparency and explainability examines explanation-use behaviors and trust calibration, assessing whether learners calibrate trust appropriately rather than uniformly accepting or rejecting guidance. The distinction between transparency (visible rationale) and interpretability (learner understanding) becomes important here. Transparency supports interpretability, and interpretability strengthens monitoring.
Enable scaffold fading with responsiveness is evaluated through independence curves: reduced reliance on high-intensity scaffolds, coherent transitions from directive to reflective cues, and healthy relapse–recovery patterns. These patterns indicate whether fading preserves challenge while supporting the learner's growing competence.
Adapt responsively, with integrity requires evidence that adaptivity remains anchored to learning goals. Evaluators look for alignment checks, preserved challenge levels, and signs that learners retain decision authority when conditions shift. Misaligned moves such as silent simplification or unexplained reordering signal weakening of process integrity.
Balance personalization with purpose is evaluated via roadmap adherence (variation without drift), relevance–purpose balance (engagement without losing direction), and learners’ ability to reflect on personalization choices. Effective personalization strengthens identity-relevant engagement and volition without distorting the developmental arc.
Across principles, these indicators create a shared measurement logic by identifying empirical traces that reveal whether AI is functioning as a co-regulator that supports SRL's mechanisms or as an optimizer that redirects them. This distinction is central to evaluating AI's impact on SRL.
Because the design principles articulated here concern regulatory conditions rather than outcomes, Table 2 maps each principle onto SRL processes, observable interactional evidence, and evaluation logics without presuming learning gains, including how learners engage with moments of productive difficulty and metacognitive friction over time.
Table 2
| Design principle | COPES-aligned regulatory processes | Observable learner interactional evidence | Process-focused evaluation criteria |
|---|---|---|---|
| Preserve learner agency | E, S — Evaluations and Standards (how learners interpret discrepancies, judge control, and initiate regulation) | Declining or modifying unsolicited suggestions; adjusting scaffold intensity; self-initiated strategy shifts; reported perceived autonomy | Evaluation should examine whether learners retain decision authority during moments of difficulty; specifically, whether AI support preserves learners’ role in evaluating discrepancies and setting standards for action, rather than collapsing judgment into automated recommendations. |
| Design for transparency and explainability | C, E — Conditions and Evaluations (what cues are visible to learners, and how learners interpret system interventions) | Explanation-use behaviors such as inspecting rationale and reviewing adaptation logs; calibrated rather than uniform trust in guidance | Evaluation should examine whether transparency mechanisms support learners’ interpretive work; specifically, whether explanations improve learners’ ability to evaluate cues, calibrate trust, and integrate system information into their regulatory judgment rather than substituting for it. |
| Enable scaffold fading with responsiveness | O, P, E — Operations, Products, and Evaluations (how learners execute strategies, generate work products, and judge when support is needed) | Independence curves: reduced reliance on high-intensity scaffolds; coherent transitions from directive to reflective cues; relapse–recovery patterns | Evaluation should trace independence trajectories over time, examining whether reductions in scaffold intensity coincide with stable or improving performance, increased strategic initiative, and rapid recovery after difficulty, rather than premature withdrawal or persistent dependence. |
| Adapt responsively, with integrity | C, S, E — Conditions, Standards, and Evaluations (how task environments are shaped, what counts as “good” performance, and how discrepancies are judged) | Alignment checks; preserved challenge levels; retained decision authority when conditions shift; absence of silent simplification or unexplained reordering | Evaluation should examine whether adaptive moves preserve goal-aligned standards and learner decision authority, and whether changes to conditions remain interpretable, contestable, and consistent with intended learning trajectories, rather than drifting toward efficiency-driven or opaque optimization. |
| Balance personalization with purpose | S, P — Standards and Products (what counts as successful learning and how learner outputs accumulate along a developmental trajectory) | Roadmap adherence (variation without drift); relevance–purpose balance; learners’ ability to reflect on and justify personalization choices | Evaluation should assess whether personalization preserves coherence between learner products and shared standards over time, and whether variation enriches engagement without fragmenting the developmental arc or obscuring what counts as progress. |
Mapping design principles to COPES-aligned SRL processes, observable interactional evidence, and process-focused evaluation implications.
The table organizes each principle in relation to the COPES-aligned regulatory processes it shapes, the observable interactional traces it produces, and process-focused evaluation criteria, emphasizing how co-regulation can be examined through learner–system interaction over time.
5.3 Proposed evaluation categories
These evaluative signals consolidate into four categories that clarify whether AI is functioning as a co-regulator or primarily as a performance optimizer. This structure is consistent with work showing that micro-analytic and trace-based SRL indicators can be validly aggregated when grounded in SRL theory and construct-aligned measurement (Fan et al., 2022).
Autonomy and Regulatory Agency captures patterns of voluntary engagement and strategic independence within the regulatory cycle. Rather than focusing on isolated behaviors, this category examines whether learners consistently exercise decision authority across interactions with system prompts, scaffolds, and recommendations.
Clarity and Interpretability of System Behavior addresses whether learners can meaningfully interpret system interventions. At this level, the focus shifts from specific explanation-use behaviors to the broader question of whether system visibility translates into informed regulatory judgment.
Process Integrity and Alignment evaluates whether adaptive system behaviors remain anchored to learning goals and preserve regulatory challenge over time. This category considers the cumulative coherence of adaptive adjustments rather than individual alignment checks.
Developmental Trajectory and Skill Growth examines longitudinal patterns of regulatory evolution, including increasing independence, stable relapse–recovery cycles, reflective engagement, and movement along the intended competence sequence.
Together, these categories provide a shared evaluative logic for distinguishing co-regulation from performance optimization. They foreground how regulation unfolds over time rather than focusing solely on performance outcomes.
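One illustrative way to consolidate indicator-level signals into these four categories is sketched below; the indicator names, groupings, and equal-weight averaging are assumptions for demonstration rather than a validated instrument.

```python
# Illustrative aggregation of indicator-level signals (each normalized to [0, 1])
# into the four evaluation categories. Groupings are assumptions for demonstration;
# real instruments require theory-aligned validation.
CATEGORY_MAP = {
    "autonomy_and_regulatory_agency": ["self_initiated_shifts", "prompt_decline_rate",
                                       "perceived_autonomy"],
    "clarity_and_interpretability": ["rationale_use", "trust_calibration"],
    "process_integrity_and_alignment": ["goal_alignment", "challenge_preservation"],
    "developmental_trajectory": ["independence_trend", "relapse_recovery",
                                 "roadmap_progress"],
}

def category_profile(signals: dict[str, float]) -> dict[str, float]:
    """Average the indicator signals belonging to each evaluation category."""
    return {
        category: sum(signals[i] for i in indicators) / len(indicators)
        for category, indicators in CATEGORY_MAP.items()
    }

signals = {
    "self_initiated_shifts": 0.7, "prompt_decline_rate": 0.5, "perceived_autonomy": 0.8,
    "rationale_use": 0.6, "trust_calibration": 0.7,
    "goal_alignment": 0.9, "challenge_preservation": 0.8,
    "independence_trend": 0.6, "relapse_recovery": 0.9, "roadmap_progress": 0.7,
}
for category, score in category_profile(signals).items():
    print(f"{category}: {score:.2f}")
```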
5.4 Evaluating relational dynamics
Evaluation cannot stop at cognitive or behavioral signals; relational conditions shape how regulatory moves are interpreted and enacted. Learners' interpretive experience, reflected in interactional cues such as whether prompts are attuned, whether explanations clarify or overwhelm, and whether timing supports or interrupts, directly influences monitoring, effort, and willingness to engage with productive difficulty (Conijn et al., 2023; Lim et al., 2023).
Evaluating relational quality involves both subjective and behavioral measures: perceived attunement, trust calibration, clarity of interactional signals, and willingness to question or override suggestions. Trace patterns further complement these data by showing relational behavior in action, such as dismissing misaligned prompts, revisiting rationale logs after confusion, or pausing interactions when timing feels intrusive, consistent with micro-analytic SRL approaches that combine trace evidence with learner articulation at decision points (Cleary and Callan, 2017).
When relational dynamics are well-aligned, AI interactions expand learners' sense of agency, clarity, and competence. When misaligned, they generate friction that disrupts monitoring and control rather than deepening them. Attending to this layer ensures that evaluation captures not only what the system does, but how learners interpret and respond to its presence.
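To illustrate how subjective and behavioral measures might be combined for a single relational construct, the sketch below pairs a self-reported trust score with observed override behavior; the thresholds and decision rules are illustrative assumptions only, not validated measures of trust calibration.

```python
# Pairing a self-report signal with a behavioral trace for one relational construct.
# Scale anchors, rates, and cutoffs are illustrative assumptions.
def trust_calibration(reported_trust: float, override_rate: float,
                      system_error_rate: float) -> str:
    """Crude calibration check: trust should track system reliability.

    All inputs are in [0, 1]; a well-calibrated learner overrides roughly
    as often as the system errs.
    """
    gap = abs(override_rate - system_error_rate)
    if reported_trust > 0.8 and override_rate < 0.05 < system_error_rate:
        return "possible over-trust: high trust, few overrides despite errors"
    if gap <= 0.1:
        return "calibrated: overrides roughly track system reliability"
    return "miscalibrated: overrides diverge from system reliability"

print(trust_calibration(reported_trust=0.9, override_rate=0.02, system_error_rate=0.15))
# possible over-trust: high trust, few overrides despite errors
```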
Framed this way, the evaluative question shifts from whether the system worked to whether it supported the learner's regulatory growth. Grounding evaluation in both SRL theory and relational interaction patterns preserves accountability while situating learner experience as a core dimension of self-regulation in AI-supported environments, a foundation that becomes critical when considering how these principles extend across contexts.
6 Implications across contexts
With the design principles and evaluation architecture established, this section examines how they reshape SRL across authentic learning settings. Workplace learning is included here not as a distinct theoretical domain but as a setting in which SRL demands intensify, making the implications of AI co-regulation especially visible. Across classrooms, workplaces, and institutions, the central question is whether AI preserves the conditions under which SRL develops. The following subsections examine how that question plays out in practice.
6.1 Educational practice
Applying the co-regulatory framing in educational practice reshapes how learners encounter challenge and how instructors structure the surrounding ecology. In classrooms, relationally aligned systems make the rhythms of SRL more visible and interpretable, including goal-setting, monitoring, and strategic adjustment. Rather than relying on automated hints or fully optimized pathways, learners encounter scaffolds that preserve productive metacognitive friction and provide calibrated choice. In doing so, they avoid nudges toward surface-level optimization that may bypass durable self-regulatory development (Weijers et al., 2023). This keeps key decision points for monitoring and control visible and interpretable and reinforces regulatory behaviors that transfer across contexts and courses.
Instructors shift from compensators who fill gaps in strategy use to stewards of the regulatory environment. They use learner-facing traces such as scaffold-level shifts, declined prompts, and rationale-use patterns to understand how students navigate strategies. This supports more precise, dialogic guidance and helps reduce inequities in support, particularly for students reluctant to seek help publicly. In these contexts, help seeking functions as a socially conditioned, strategic regulatory decision rather than a neutral request for assistance (Karabenick and Berger, 2013).
Taken together, these shifts redirect classroom AI use from performance optimization toward regulatory development. They strengthen reflective practice, widen access to strategic support, and clarify how learner choice, system prompts, and teacher interpretation work together to sustain SRL.
6.2 Workplace learning and professional development
Beyond classroom settings, workplace learning typically places greater regulatory responsibility on learners and is embedded in performance pressures. Relationally aligned AI helps workers navigate complexity without automating judgment. Adjustable support levels, brief, clear rationales, and visible adaptation logic allow learners to retain ownership of their workflow and the regulatory decisions within it, using the system to strengthen planning, monitoring, and strategic reflection.
These features counter a familiar organizational risk identified earlier in this manuscript: shortcut thinking, or prematurely collapsing problem-framing into quick solutions. Efficiency-driven systems can unintentionally bypass the reflective and problem-framing activities that constitute professional learning, a pattern documented in studies showing how optimization and reliance on automated support can, over time, reduce evaluative engagement and independent judgment (Zhai et al., 2024). Relational design keeps developmental aims foregrounded by preserving challenge, surfacing alternatives, and offering explanations that invite evaluation instead of passive acceptance.
At the organizational level, responsibly captured trace indicators, including scaffold adjustments, explanation-use patterns, and strategic shifts, provide insight into how employees build regulatory competence. Used with care, these signals can inform professional development efforts by surfacing how regulatory competence develops over time without sliding into surveillance logic. When implemented well, co-regulatory AI sustains agency amid shifting demands and supports an ecosystem in which reflective expertise can develop.
6.3 Equity and inclusion
Equity concerns run through every layer of SRL. Learners enter environments with unequal histories of preparation, confidence, and access to strategic support (Antonelli et al., 2020). These differences shape how they interpret both friction and assistance. Relationally aligned design helps mitigate these disparities by offering optional, interpretable scaffolds that support learners who might hesitate to seek help, while preserving the agency required for SRL to develop (Karabenick and Berger, 2013).
Equity risks operate at both interactional and structural levels, shaping how AI responds to learners' moment-to-moment cues and how prior exposure to institutional norms influences trust and interpretability. To address these risks, transparent explanations, adjustable scaffold intensity, and visible rationale guard against inequitable patterns introduced when AI adapts too quickly or based on shallow behavioral signals. Over-scaffolding can restrict opportunities to build independence, while under-scaffolding can mask disengagement behind procedural compliance. The evaluation categories from Section 5, especially autonomy, interpretability, and process integrity, become essential tools for detecting drift that may affect learner groups unevenly.
Equity implications also extend to institutional responsibility, including monitoring whether personalization reinforces or disrupts opportunity gaps, ensuring support patterns distribute fairly, and checking whether learners who benefit most from strategic scaffolding actually receive it. Relationally aligned AI widens access to reflective support without reducing diversity to algorithmic assumptions, ensuring that growth reflects developing regulatory capacities rather than prior familiarity with institutional norms.
6.4 Institutional adoption and governance
Institutional decisions play a substantial role in determining whether co-regulatory AI strengthens SRL or unintentionally undermines it (Williamson et al., 2023). Governance should treat relational design as a shared responsibility rather than a technical afterthought. Transparent mechanisms, such as adaptation logs, learner-visible rationale, and adjustable scaffold levels, give students and instructors a clear view of how the system participates in regulation. Without these structures, adaptivity can quietly reallocate decision authority to automated pipelines (Williamson et al., 2023).
Institutions also require systematic oversight of AI behavior across contexts and populations. The evaluation criteria from Section 5 provide a rubric for auditing alignment with learning goals, monitoring drift toward performance optimization, and ensuring that support patterns do not reproduce inequities. This oversight requires human interpretive judgment, not only technical audits. Instructors and learners must be able to see and question system behavior that misaligns with developmental purpose.
Professional learning is equally important for responsible adoption. Educators need time and support to interpret trace data, understand system rationales, and integrate insights into pedagogical decisions. With this infrastructure in place, institutions can cultivate environments in which AI operates as a transparent and trustworthy support structure, amplifying strategic development rather than narrowing it.
Across the contexts examined in this section, the central implication is practical rather than merely conceptual: AI can expand opportunities for planning, monitoring, and control only when adoption choices preserve learner decision authority over time. Responsible adoption therefore becomes an institutional design problem, not a local feature choice. Section 7 turns from these contextual implications to the framework's field-level contribution, its boundaries, and the empirical questions it opens.
7 Discussion
This manuscript advances a central claim: AI's impact on self-regulated learning depends less on technical sophistication than on how it participates in learners' regulatory cycles. As AI becomes embedded in planning, monitoring, and control, the question shifts from whether systems personalize or automate learning to whether their interventions preserve agency and sustain productive metacognitive friction, thereby keeping decision authority with the learner. SRL theory locates this distinction at the level of cue interpretation, strategy selection, and alignment with internal standards.
Framing AI as a co-regulator clarifies the mechanism beneath the field's recurring polarization between personalization optimism and automation anxiety: AI reshapes the conditions under which regulation unfolds. Timing, rationale visibility, and scaffold intensity influence how learners compare their actions to internal standards and decide when to persist, revise, or explore alternatives. These design choices function as structural influences on SRL mechanisms. Relational design sharpens this point by recognizing that learners interpret system behavior through perceived attunement, trust calibration, and clarity of explanations that shape monitoring accuracy and control. Relational alignment functions as a precondition for AI to support SRL without distorting regulatory processes, shaping how system timing, explanations, and learner control are interpreted within the COPES regulatory cycle. The five design principles and accompanying evaluation framework translate this argument into design and measurement terms, making it possible to examine whether AI preserves or displaces the regulatory work of monitoring, control, and revision.
At the same time, this framework carries inherent boundaries. It does not resolve questions about how AI should behave in every learning context or prescribe fixed scaffold intensities or adaptation thresholds. Its claims remain conceptual rather than predictive, and its usefulness depends on empirical studies capable of testing where relational alignment enhances SRL and where contextual constraints, system behaviors, or learner variability complicate its assumptions. In this sense, the framework is best understood as a relational design orientation at the level of regulatory processes, rather than as a claim about the capabilities of current AI systems.
A related boundary concerns the limits of current AI systems as the design principles are enacted in practice. Within this framework, the principles are articulated as interactional targets rather than claims about full technical transparency. While current large language models remain only partially interpretable, the framework specifies conditions under which system behavior can be structured at the interactional level to support reflective engagement and preserve learner judgment. Here, transparency is approached as a property of interaction, shaped through timing, explanation, and learner control, rather than as a complete account of internal model processes.
This perspective also generates testable hypotheses for future research. If relationally aligned co-regulation strengthens SRL, learners should show (a) more accurate monitoring judgments when explanations make adaptive logic visible, (b) greater initiative in strategy changes when agency is preserved through adjustable scaffolds, and (c) more coherent independence curves during fading, including productive relapse–recovery cycles. Conversely, systems that over-direct or obscure rationale should produce patterns consistent with regulatory displacement: passive acceptance of recommendations, premature reliance on high-intensity scaffolds, or diminished engagement with planning and revision. These predictions extend SRL theory into AI-mediated contexts and offer a foundation for cumulative empirical work (Banihashem et al., 2025).
As a conceptual contribution, this framework offers the field a way to analyze AI not only through efficiency or personalization outcomes, but through how it structures the regulatory conditions of learning. It reframes AI's role in SRL as participation in a regulatory ecology shaped by timing, transparency, interactional attunement, and alignment with purpose, and provides a basis for evaluating whether these conditions strengthen or displace learners' decision authority. The framework therefore invites empirical validation, refinement, and context-sensitive experimentation while positioning AI-supported learning as a matter of regulatory stewardship rather than technical deployment alone.
8 Conclusion and call to collaborate
Across this manuscript, we have argued that AI's role in self-regulated learning is not to optimize learning on the learner's behalf, but to participate in regulatory cycles in ways that preserve agency, sustain productive metacognitive friction, and deepen reflective practice. By grounding design in SRL theory and extending it through a relational lens, this framework clarifies what responsible AI–SRL design must preserve and how those commitments can be evaluated in practice.
Yet this framework is not a destination. It is a starting point for a field still discovering how AI reshapes the conditions of learning across classrooms, workplaces, and institutions. Many of the most important questions remain empirical: How do learners experience AI support over time? Which adaptation patterns sustain challenge rather than eliminate it? How do relational dynamics such as trust, clarity, and attunement shape strategy use and long-term regulatory development? These answers will not emerge from theory or system design alone; they require inquiry across disciplines and contexts. By pairing principled design with process-sensitive evaluation, the field can begin to understand where AI genuinely supports SRL and where it risks redirecting or diminishing it.
Collaboration is especially crucial because relationally aligned AI cannot be fully validated through small-scale prototypes or controlled studies alone. It requires experimentation across varied learning ecologies, shared datasets that document regulatory processes rather than only outcomes, and partnerships between educators, learning designers, technologists, and researchers. Institutions adopting AI-supported learning environments can play a vital role by foregrounding transparency, building norms for reflective use, and contributing insights that refine both design and evaluation. Such collaborations can illuminate not only whether AI can strengthen SRL, but how to ensure it does so consistently, equitably, and with respect for learners' autonomy and dignity.
As AI systems assume more central roles in learning environments, the challenge shifts from technical performance to relational responsibility: ensuring that design choices sustain learners' capacity to direct their own growth. In this sense, the future of SRL is a shared developmental process shaped through ongoing human–AI interaction.
The co-regulation framework offered here is meant to be tested, challenged, and extended through partnerships that examine its limits, refine its design moves, and strengthen its practical value. The path forward is collaborative, and its progress depends on collective stewardship of AI systems that honor the complexity, dignity, and developmental potential of human learning.
Statements
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Author contributions
MA: Conceptualization, Methodology, Writing – original draft, Writing – review & editing.
Funding
The author declares that financial support was not received for this work and/or its publication.
Conflict of interest
The author declares that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author declares that generative AI was used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Antonelli, J., Jones, S. J., Backscheider Burridge, A., and Hawkins, J. (2020). Understanding the self-regulated learning characteristics of first-generation college students. J. Coll. Stud. Dev. 61 (1), 67–83. doi: 10.1353/csd.2020.0004
Babayev, J. (2025). Algorithmic autonomy or dependence? A mixed-methods study on AI personalization and self-regulated learning in higher education. J. Azerbaijan Lang. Educ. Stud. 2 (4), 32–40. doi: 10.69760/jales.2025004002
Bandura, A. (1997). Self-Efficacy: The Exercise of Control. New York, NY: W.H. Freeman.
Banihashem, S. K., Bond, M., Bergdahl, N., Khosravi, H., and Noroozi, O. (2025). A systematic mapping review at the intersection of artificial intelligence and self-regulated learning. Int. J. Educ. Technol. High. Educ. 22. doi: 10.1186/s41239-025-00548-8
Borchers, C., Zhang, J., Baker, R. S., and Aleven, V. (2024). Using think-aloud data to understand relations between self-regulation cycle characteristics and student performance in intelligent tutoring systems. LAK '24: Proceedings of the 14th Learning Analytics and Knowledge Conference, 529–539. doi: 10.1145/3636555.3636911
Cleary, T. J., and Callan, G. L. (2017). “Assessing self-regulated learning using microanalytic methods,” in Handbook of Self-Regulation of Learning and Performance, 2nd Edn, eds. D. H. Schunk and J. A. Greene (New York, NY: Routledge), 338–351. doi: 10.4324/9781315697048-22
Conijn, R., Kahr, P., and Snijders, C. (2023). The effects of explanations in automated essay scoring systems on student trust and motivation. J. Learn. Anal. 10 (1), 37–53. doi: 10.18608/jla.2023.7801
Doshi-Velez, F., and Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv [Preprint]. arXiv:1702.08608. Available online at: https://arxiv.org/abs/1702.08608 (Accessed November 19, 2025).
du Plooy, E., Casteleijn, D., and Franzsen, D. (2024). Personalized adaptive learning in higher education: a scoping review of key characteristics and impact on academic performance and engagement. Heliyon 10 (21), e39630. doi: 10.1016/j.heliyon.2024.e39630
Fan, Y., van der Graaf, J., Lim, L., Raković, M., Singh, S., Kilgour, J., et al. (2022). Towards investigating the validity of measurement of self-regulated learning based on trace data. Metacogn. Learn. 17 (3), 949–987. doi: 10.1007/s11409-022-09291-1
Holmes, W., Bialik, M., and Fadel, C. (2021). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign.
Järvelä, S., Nguyen, A., and Molenaar, I. (2023). Advancing SRL research with artificial intelligence. Comput. Hum. Behav. 147, 107847. doi: 10.1016/j.chb.2023.107847
Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educ. Psychol. Rev. 19 (3), 509–539. doi: 10.1007/s10648-007-9054-3
Karabenick, S. A., and Berger, J.-L. (2013). “Help seeking as a self-regulated learning strategy,” in Applications of Self-Regulated Learning Across Diverse Disciplines, eds. H. Bembenutty, T. J. Cleary, and A. Kitsantas (Greenwich, CT: IAP Press), 237–261. doi: 10.1108/978-1-62396-134-320251009
Khalil, M., Wong, J., Wasson, B., and Paas, F. (2024). Adaptive support for self-regulated learning in digital learning environments. Br. J. Educ. Technol. 55 (4), 1281–1289. doi: 10.1111/bjet.13479
Koedinger, K. R., Corbett, A. T., and Perfetti, C. (2012). The knowledge-learning-instruction framework: bridging the science-practice chasm to enhance robust student learning. Cogn. Sci. 36 (5), 757–798. doi: 10.1111/j.1551-6709.2012.01245.x
Kumar, H., Xiao, R., Lawson, B., Musabirov, I., Shi, J., Wang, X., et al. (2024). Supporting self-reflection at scale with large language models: insights from randomized field experiments in classrooms. Proceedings of the Eleventh ACM Conference on Learning @ Scale, 86–97. doi: 10.1145/3657604.3662042
Lim, L., Bannert, M., van der Graaf, J., Fan, Y., Rakovic, M., Singh, S., et al. (2023). How do students learn with real-time personalized scaffolds? Br. J. Educ. Technol. 55 (4), 1309–1327. doi: 10.1111/bjet.13414
Miller, T. (2019). Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38. doi: 10.1016/j.artint.2018.07.007
Mills, S., and Sætra, H. S. (2022). The autonomous choice architect. AI Soc. 39, 583–595. doi: 10.1007/s00146-022-01486-z
Molenaar, I. (2022). The concept of hybrid human-AI regulation: exemplifying how to support young learners' self-regulated learning. Comput. Educ. Artif. Intell. 3, 100070. doi: 10.1016/j.caeai.2022.100070
Pintrich, P. R., Smith, D. A. F., García, T., and McKeachie, W. J. (1991). A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor, MI: University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning.
Ryan, R. M., and Deci, E. L. (2000). The “what” and “why” of goal pursuits: human needs and the self-determination of behavior. Psychol. Inq. 11 (4), 227–268. doi: 10.1207/S15327965PLI1104_01
Siadaty, M., Gašević, D., and Hatala, M. (2016). Trace-based micro-analytic measurement of self-regulated learning processes. J. Learn. Anal. 3 (1), 183–214. doi: 10.18608/jla.2016.31.11
Strielkowski, W., Grebennikova, V., Lisovskiy, A., Rakhimova, G., and Vasileva, T. (2024). AI-driven adaptive learning for sustainable educational transformation. Sustain. Dev. 33 (2), 1921–1947. doi: 10.1002/sd.3221
Sweller, J., van Merrienboer, J. J. G., and Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educ. Psychol. Rev. 10 (3), 251–296. doi: 10.1023/A:1022193728205
Tankelevitch, L., Kewenig, V., Simkute, A., Scott, A. E., Sarkar, A., Sellen, A., et al. (2024). The metacognitive demands and opportunities of generative AI. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–24. doi: 10.1145/3613904.3642902
Tinajero, C., Mayo, M. E., Villar, E., and Martínez-López, Z. (2024). Classic and modern models of self-regulated learning: integrative and componential analysis. Front. Psychol. 15:1307574. doi: 10.3389/fpsyg.2024.1307574
Weijers, R., de Koning, B., Vermetten, Y., and Paas, F. (2023). Nudging autonomous learning behavior: three field experiments. Educ. Sci. 13 (1), 49. doi: 10.3390/educsci13010049
Williamson, B., and Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learn. Media Technol. 45 (3), 223–235. doi: 10.1080/17439884.2020.1798995
Williamson, B., Macgilchrist, F., and Potter, J. (2023). Re-examining AI, automation and datafication in education. Learn. Media Technol. 48 (1), 1–5. doi: 10.1080/17439884.2023.2167830
Winne, P. H., and Hadwin, A. F. (1998). “Studying as self-regulated learning,” in Metacognition in Educational Theory and Practice, eds. D. J. Hacker, J. Dunlosky, and A. C. Graesser (Mahwah, NJ: Erlbaum), 277–304.
Winne, P. H., and Hadwin, A. F. (2008). “The weave of motivation and self-regulated learning,” in Motivation and Self-Regulated Learning: Theory, Research, and Applications, eds. D. H. Schunk and B. J. Zimmerman (New York, NY: Lawrence Erlbaum Associates), 297–314.
Wood, D., Bruner, J. S., and Ross, G. (1976). The role of tutoring in problem solving. J. Child Psychol. Psychiatry 17 (2), 89–100. doi: 10.1111/j.1469-7610.1976.tb00381.x
Zhai, C., Wibowo, S., and Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11, 1–37. doi: 10.1186/s40561-024-00316-7
Zimmerman, B. J. (2000). “Attaining self-regulation: a social cognitive perspective,” in Handbook of Self-Regulation, eds. M. Boekaerts, P. R. Pintrich, and M. Zeidner (San Diego, CA: Academic Press), 13–39. doi: 10.1016/B978-012109890-2/50031-7
Keywords
adaptive systems, AI-in-education, co-regulation, interpretability, metacognition, relational design, self-regulated learning
Citation
Agustin MC (2026) AI as a co-regulator: relational design for strengthening self-regulated learning. Front. Educ. 11:1761602. doi: 10.3389/feduc.2026.1761602
Received
05 December 2025
Revised
31 March 2026
Accepted
02 April 2026
Published
15 April 2026
Volume
11 - 2026
Edited by
Danny Glick, University of California, Irvine, United States
Reviewed by
Chien-Sing Lee, Sunway University, Malaysia
Huiyu Zhang, Temasek Polytechnic, Singapore
Copyright
© 2026 Agustin.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Matthew Christian Agustin matt@responsibleinnovationlab.org; mcagusti@asu.edu