
SYSTEMATIC REVIEW article

Front. Psychol., 11 February 2026

Sec. Educational Psychology

Volume 17 - 2026 | https://doi.org/10.3389/fpsyg.2026.1711599

Dimensions of English language learner autonomy assessment: a systematic review of what is there and what is missing

Aiju Liu1,2*, Amelia Abdullah2*, Hong Dong1
  • 1English Teaching Department, Shandong College of Electronic Technology, Jinan, China
  • 2School of Educational Studies, Universiti Sains Malaysia, Minden Heights, Malaysia

Extensive research highlights the importance of learner autonomy in English language acquisition. Paradoxically, existing assessments for Learner Autonomy in English Language Learning (LAELL) focus narrowly on formal learning environments. This systematic mapping review identifies frequently used dimensions and highlights gaps in LAELL assessments within evolving learning contexts. Adhering to PRISMA guidelines, we identified 30 studies (1996–2025) across five databases. Key findings reveal that (1) the metacognitive dimension is the most frequently addressed in the reviewed studies, followed by motivation and the social dimension, while technology, willingness, affective factors, critical thinking, political factors, and self-efficacy are the least represented; and (2) adapted questionnaires predominantly originated from older frameworks (50% from 1978–2009), with only 10% from 2016 onward. Three gaps emerge: (1) overreliance on outdated questionnaire adaptations; (2) absence of LAELL assessments designed for informal learning contexts; and (3) insufficient attention to digital competences, particularly technology literacy and critical thinking. These findings highlight a potential misalignment between current assessment tools and contemporary learning environments. We suggest that LAELL assessments be updated to include dimensions of digital competence and critical thinking, and that skill-specific assessments be developed for productive domains such as writing.

Introduction

Research on learner autonomy has evolved over the past forty years (Holec, 1981; Stringer, 2024). It is commonly defined as a capacity (Little, 1996) to take control of one’s learning, which can manifest in varying degrees (Nunan, 1997). Traditionally conceptualized within formal education, this construct is being fundamentally reshaped by informal digital learning (Sockett, 2023; Zhang and Liu, 2024). Sockett and Toffoli (2012), for example, describe learner autonomy as the capacity to self-direct learning through the intentional use of online tools and resources beyond the classroom. This expansion necessitates a critical re-examination of how learner autonomy is assessed across diverse learning environments.

Given this shift, the role of assessment becomes critically important. Researchers highlight that assessment not only helps learners develop autonomy (Chong and Reinders, 2022) but also assists teachers in identifying its key dimensions and adapting instruction to diverse contexts (Lamb and Little, 2016). Considering English’s role as a lingua franca in politics, economics, and academia (Almusharraf, 2018), this review specifically examines Learner Autonomy in English Language Learning (LAELL). While numerous assessment tools for LAELL exist, they have primarily targeted formal learning contexts (Banat, 2022; Shen et al., 2020). This presents a critical research gap: the growing prevalence of informal digital learning creates a mismatch, as existing assessments may not adequately capture the dimensions of autonomy relevant to contemporary contexts. A systematic analysis of what is and is not being measured is therefore needed. Despite a growing body of research on learner autonomy, no systematic review has analyzed the dimensions underpinning LAELL assessments to identify this gap. The sole existing review (Chong and Reinders, 2022) identified commonly used assessment tools for fostering LAELL, offering valuable methodological insights but not examining the underlying dimensions of autonomy. Building on their work, this systematic mapping review examines autonomy dimensions within evolving learning environments.

LAELL is a context-dependent and multifaceted construct, comprising political, psychological, sociocultural, and technological dimensions (Oussou, 2024; Stringer, 2024). These dimensions dynamically respond to evolving educational contexts, with digital competence emerging as particularly critical within technology-mediated learning (Chen and Liu, 2024). This focus on digital competence aligns with the profile of contemporary learners. The initial label “digital natives” (Evans and Robertson, 2020; Prensky, 2001a) has been largely supplanted by “digital learners,” as the former lacks empirical support and inaccurately assumes that younger generations possess high digital competence (Gallardo-Echenique et al., 2015).

Today, learning is increasingly shaped by digital environments, highlighting digital competence as a vital lifelong skill (Council of Europe, 2018). Supporting this, China’s CNNIC (2023) reports that over 80% of Generation Z (ages 20–29) possess basic digital competence. These findings suggest that learners raised with constant access to digital tools and social media tend to prefer technology-mediated learning over traditional methods (Sockett, 2023). This shift also demands stronger critical thinking skills to manage excessive information online (Bawden and Robinson, 2020). Consequently, there is a pressing need for research into whether existing LAELL assessments adequately address the characteristics of digital learners.

To address this gap, this review analyzed articles from 1996 to 2025 regarding current trends, dimensions, and gaps in LAELL assessments. The findings aim to provide insights for researchers and educators seeking to adopt, adapt, or develop innovative LAELL assessments suited to contemporary digital settings, ultimately supporting learners in developing autonomy. Specifically, the study seeks to answer the research questions outlined in Table 1.

Table 1. Research questions.

Literature review

Research consistently highlights that LAELL is a dynamic and evolving concept (Stringer, 2024; Tassinari, 2012). During the era dominated by classroom-based education, learner autonomy was often perceived as an inborn yet teachable ability, primarily developed through in-class practices (Almusharraf, 2018). Nevertheless, the digital revolution has dramatically shifted LAELL research from in-class settings to informal digital learning contexts (Borges, 2022). Consequently, discussions on LAELL have increasingly extended to informal learning contexts, reflected in terms such as “online informal learning of English” (Toffoli and Sockett, 2015), “out-of-class autonomous language learning” (Lai, 2019), “extramural English” (Sundqvist and Sylvén, 2016), “the digital wilds” (Sockett, 2023), and “informal digital learning of English (IDLE)” (Zhang and Liu, 2024). This evolution reflects a transition in LAELL from teacher-dependent to self-directed learning, particularly evident in the rise of IDLE.

LAELL is inherently multidimensional, shaped by personal and contextual dimensions. Grounded in Benson’s (1997) three-dimensional framework (technical, psychological, and political), subsequent frameworks have progressively expanded these dimensions (Gholami, 2016; Shen et al., 2020). This aligns with Benson’s (2001) description of autonomy as “a multidimensional capacity that will take different forms for different individuals, and even for the same individual in different contexts or at different times” (p. 47). In 2022, Borges further advanced this understanding by proposing a Complex Dynamic Model of Autonomy Development, which includes dimensions such as reflection, planning, and evaluation; autonomy support; nested subsystems (e.g., motivation, identity, beliefs, and affect); and the broader learning context.

Despite this expansion, key dimensions essential for digital learning remain theoretically underdeveloped. While technology has traditionally been treated as an external tool or context in EFL learning (Enayati and Gilakjani, 2020; Chen, 2021), post-digital scholars argue it should be reconceived as a critical dimension that actively reshapes the cognitive, social, and affective foundations of learner autonomy (Campbell and Olteanu, 2024; Knox, 2019). Similarly, critical thinking in digital contexts extends beyond information evaluation to include an understanding of algorithm-driven content curation, platform biases, and the political economy of digital tools (Bawden and Robinson, 2020; Facione, 2000). These two dimensions are seldom integrated into LAELL assessment frameworks.

LAELL is not an “all-or-nothing” concept (Nunan, 1997, p. 92) but rather a complex, layered construct. It is nonetheless recognized as measurable, offering insights into learners’ progress. LAELL assessments serve multiple purposes and must adapt to evolving learning environments, the traits of digital learners, and ongoing technological advancements. As such, research on LAELL assessments has expanded beyond measuring autonomy levels to examining readiness for LAELL (Kartal and Balçikanli, 2019; Oussou, 2024) and learners’ self-perceptions of autonomy (Hoa et al., 2019).

Despite the critical need for a comprehensive review of LAELL assessments, existing reviews remain scarce. The sole existing scoping review (Chong and Reinders, 2022) examined 61 articles, identifying assessment instruments such as questionnaires (n = 56), language tests (n = 19), interviews (n = 11), language tasks (n = 11), field notes (n = 2), and documents (n = 1). While this review categorized questionnaires into existing, adapted, and original types, it did not examine the evaluative dimensions used. To address this gap, our study offers a comprehensive analysis of the dimensions in LAELL assessments, providing guidance for researchers developing innovative assessment tools.

Materials and methods

Research methodology and sources

We conducted a systematic mapping review for literature selection, analysis, and synthesis. This type of review is specifically designed to provide a broad overview of a research field rather than to answer a highly focused question (Dai and Ke, 2022). This approach is well suited to outlining the structure of learner autonomy and identifying dominant research trends, which aligns with the exploratory aims of our research questions. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines throughout the study and in reporting the results. The search was performed across Scopus, Web of Science (WOS), Science Direct, and ERIC, supplemented by Google Scholar. This combination was essential for our systematic mapping review: while the former four databases ensure coverage of high-quality, peer-reviewed literature, Google Scholar expands the search to relevant grey literature and articles from a broader range of journals and conferences (Haddaway et al., 2015).

Eligibility criteria and search strings

Inclusion and exclusion criteria were set prior to the literature selection (see Table 2). The criteria covered database, language, timeframe, publication status, methodology, and respondents. The decision to limit the review to English-language publications ensures consistency and rigor throughout the review process. While this is a common limitation, research indicates it has minimal impact on overall conclusions (Morrison et al., 2012). Moreover, the review covers studies published from 1996 onward, when Warschauer first highlighted the importance of investigating computer-mediated English language learning (Warschauer et al., 1996).

Table 2. The inclusion/exclusion criteria for the study.

Accurate keywords, as well as their synonyms and variations, were identified to retrieve more relevant documents (Grames et al., 2019). Boolean operators (i.e., AND, OR, NOT) and a wildcard (the asterisk) were included in the search strings. While many studies use “factors,” “facets,” and “aspects” to examine perceptions or correlations related to learner autonomy (Fotiadou et al., 2017; Basri, 2023), we specifically targeted “dimension*” and “construct*” to maintain our assessment-focused scope. Meanwhile, related terms such as “learner agency” and “self-directed learning” were excluded from the search strings. While closely related, learner autonomy specifically concerns how learning is controlled, whereas agency addresses the why and who behind learning behavior, and self-directed learning often implies a broader educational context beyond language learning (Rasulova and Ottoson, 2022). This exclusion ensured that the review remained focused on instruments explicitly designed to assess LAELL. The search string for Scopus, WOS, and ERIC, applied to the title and abstract of papers, was: (“learner autonomy” OR autonomous) AND English AND (dimension* OR construct*) AND (questionnaire* OR instrument OR assessment OR evaluation OR measure* OR rubric OR scale). Because Science Direct and Google Scholar limit queries to eight Boolean operators, the search string there was: [“learner autonomy” AND English AND (dimension OR construct) AND (questionnaire OR assessment OR evaluation OR measure)].

To manage the high volume generated from Google Scholar, the search was limited to the first 100 records ranked by “relevance,” a metric determined by Google Scholar’s ranking algorithm and citation count (Beel and Gipp, 2009; Haddaway et al., 2015). This screening threshold prioritizes records with greater scholarly impact and relevance to the query (Beel and Gipp, 2009). To ensure topical relevance of the retrieved items, all 100 results underwent manual screening against the study’s predefined inclusion/exclusion criteria (see Table 2). During this process, duplicates (against both internal Google Scholar results and records from other databases) were removed, and irrelevant items were excluded.

Selection process

Manual searching: We conducted the first search following PRISMA guidelines (see Figure 1) on December 10, 2024. The initial search yielded 1,218 articles. After removing 70 duplicates and excluding 764 articles that did not meet the inclusion criteria, 384 articles remained. Two reviewers independently assessed their eligibility, and 319 articles were excluded. This left 65 studies for full-text screening, of which 37 were excluded due to insufficient information on assessments. Ultimately, 28 articles met the inclusion criteria. To ensure the review included the most recent research, a follow-up search was conducted on May 29, 2025, covering publications from December 2024 to May 2025 across the same five databases, which yielded 171 articles. After removing duplicates and irrelevant studies, two articles were retained for analysis. Altogether, 30 studies were analyzed in this review.

Figure 1. Flow diagram for systematic reviews (adapted from Page et al., 2021).

Data collection process and quality assessment

To minimize bias and ensure reliability, we implemented a dual-review process for data collection and extraction (Pérez et al., 2020). Two reviewers (A and Z) independently extracted data using an electronic data extraction form, including the article title, abstract, reference details, country, discipline, sample size, distribution scope (e.g., national or international), methodology, and the reliability and validity of the assessments. Following the inclusion criteria (see Table 2), each study underwent a quality appraisal: (1) alignment with research aims; (2) sampling justification; (3) instrument reliability; (4) instrument validity; and (5) trustworthiness of findings. Consistent with the objectives of a systematic mapping review, this assessment was used not to exclude studies but to characterize the methodological robustness of the evidence base. This critical appraisal directly informed the interpretation of the findings, allowing for nuanced consideration of factors such as whether identified trends were supported by studies with small samples or instruments with limited validation. Inter-rater agreement for the eligibility screening was assessed using Cohen’s Kappa statistic (Pérez et al., 2020); the resulting value of 0.700 indicates substantial agreement.

Statistical analysis

To systematically identify and define the dimensions present in LAELL assessments, we employed a hybrid content analysis approach guided by a structured codebook. This process combined deductive and inductive reasoning. First, we developed a deductive framework based on established theoretical dimensions of learner autonomy: technical, psychological, political, social, and technological (Oussou, 2024; Stringer, 2024). These five domains formed the initial structure of the codebook.

Within each domain, we then conducted an inductive thematic analysis. Two reviewers independently coded the assessment items to identify recurring themes. Throughout this process, the codebook was iteratively refined to include emerging inductive themes, along with illustrative examples drawn from the data (see Table 3). Constant comparison was used to ensure themes were grounded in the data.

Table 3. Coding excerpts.

Through reviewer consensus, the inductive thematic analysis yielded fourteen distinct dimensions. To ensure a faithful representation of the literature and maintain the granularity required for a precise mapping, we have preserved these as distinct constructs. While we acknowledge potential conceptual overlaps (e.g., between affective factors and motivation), this approach avoids prematurely collapsing the varied conceptualizations present in the source instruments. Consequently, our analysis is structured around a clear conceptual hierarchy: the five deductive domains represent overarching domains on autonomy (e.g., the Psychological domain), while the fourteen inductive dimensions are the specific constructs within them (e.g., motivation, self-efficacy within the Psychological domain).
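The two-level hierarchy described above (five deductive domains containing fourteen inductive dimensions) can be pictured as a simple mapping. The sketch below is illustrative only: aside from the placements of motivation and self-efficacy (psychological) and cognitive (technical), which are stated in the text, the domain assignments shown are hypothetical and do not reproduce the authors' actual codebook.

```python
# Illustrative sketch of the codebook's two-level structure.
# Only "motivation"/"self-efficacy" (psychological) and "cognitive"
# (technical) are placed explicitly in the text; the remaining
# assignments are hypothetical examples.
CODEBOOK = {
    "technical": ["cognitive"],
    "psychological": ["motivation", "self-efficacy", "attitude", "confidence"],
    "political": ["political"],
    "social": ["social responsibility"],
    "technological": ["technology"],
}

# Reverse lookup: which overarching domain a given dimension belongs to.
DIMENSION_TO_DOMAIN = {
    dim: dom for dom, dims in CODEBOOK.items() for dim in dims
}
```

Structuring the codebook this way keeps the deductive frame fixed while letting new inductive dimensions be appended under a domain as they emerge during coding.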

Validity and inter-rater reliability

To ensure the validity and accuracy of the coding process, two reviewers (A and Z) first independently coded a randomly selected 20% subset of the included studies. Inter-coder reliability was assessed using Cohen’s Kappa, yielding a value of 0.720, which indicates substantial agreement (McHugh, 2012). Given the complexity of autonomy constructs, coding disagreements were resolved through a structured consensus process. Reviewers jointly reexamined each conflicting item against the original source text and discussed interpretations until reaching a shared, evidence-based decision. This two-stage approach (i.e., independent coding followed by deliberative consensus) is particularly suited for research requiring high conceptual inference. It strengthens the methodological rigor of the coding framework, enhancing the robustness and credibility of the study’s findings.
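For readers unfamiliar with the statistic, Cohen's kappa corrects raw agreement for the agreement two coders would reach by chance. The following self-contained sketch illustrates the computation; the coder labels and the `cohens_kappa` helper are hypothetical and not part of the authors' analysis.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same items."""
    n = len(coder_a)
    # Observed agreement: share of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical dimension labels assigned by two coders to five items.
a = ["metacognitive", "metacognitive", "motivation", "social", "metacognitive"]
b = ["metacognitive", "motivation", "motivation", "social", "metacognitive"]
kappa = cohens_kappa(a, b)  # 0.6875 for this toy data
```

On this toy data, raw agreement is 0.80 but kappa falls to about 0.69 once chance agreement (0.36) is discounted, which is why values near 0.70, as reported here, are conventionally read as substantial agreement.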

Results

RQ 1: What are the current trends in LAELL assessments?

We identified a growing research trend in LAELL assessments through our literature mapping, which analyzed the distribution of relevant studies by country, publication year, participant groups, research instruments, and assessment domains.

Distribution of studies. As shown in Figure 2a, the studies were distributed across multiple countries. Iran, Vietnam, and Turkey (n = 5 each) contributed the most studies, followed by Thailand and Saudi Arabia (n = 3 each), China and Japan (n = 2 each), Indonesia, Lebanon, Malaysia, Belgium, and Morocco (n = 1 each). Geographically, 28 articles originated from Asia, while one each came from Europe and Africa. This distribution should be interpreted within the constraints of our search strategy, which was limited to English-language publications using specific terminology (e.g., dimension, construct). Nevertheless, the distribution is noteworthy, as many Asian countries traditionally follow hierarchical and teacher-centered education systems (Leong et al., 2018). The growing focus on LAELL may suggest a shift toward more student-centered learning approaches in English language education across the region.

Figure 2. (a) Distribution of articles. (b) Composition of participants. (c) Research instruments. (d) Distribution by year.

Composition of participants. The result aligns with the scoping review by Chong and Reinders (2022): around 90% of the studies focused on higher education, with the remaining studies examining secondary and primary schools. As shown in Figure 2b, the predominance of university-level participants (93%) likely reflects key traits of contemporary tertiary learners: (1) developed metacognitive strategies, (2) stronger intrinsic motivation for autonomous learning, (3) a greater sense of responsibility, (4) critical use of teacher/peer support, and (5) higher technology proficiency for self-directed learning via IDLE.

Research instruments. Figure 2c shows that questionnaires (n = 29) were the most frequently used research instrument in these articles, followed by semi-structured interviews (n = 7), field notes (n = 1), learning logs (n = 1), and records of work (n = 1). Quantitative instruments thus dominate LAELL assessment studies. Notably, of the questionnaires used, twenty were adapted from prior work, five were adopted directly from existing questionnaires, and only three were originally developed.

Distribution by year. As illustrated in Figure 2d, assessment-related research was scarce before 2015, likely because early studies on learner autonomy focused primarily on description and conceptualization. Although LAELL assessment studies have appeared every year from 2015 to 2025, their number has not increased significantly over the past decade.

Innovation deficit in questionnaires. The questionnaires adapted or adopted for LAELL assessment drew predominantly on prior studies (n = 20). As Figure 3 shows, the source questionnaires are distributed over time as follows: 1978–1999 (n = 6), 2002–2009 (n = 14), 2011–2015 (n = 16), and 2016–2021 (n = 4). This trend suggests that many questionnaires currently in use were developed 10–20 years ago and have not been updated to reflect the evolving nature of LAELL and of contemporary students.

Figure 3. Temporal distribution of the source questionnaires adapted for LAELL assessments.

Research domain. LAELL is widely acknowledged as essential for English language acquisition. However, as shown in Table 4, a large portion of LAELL-related research addressed English learning as a general concept rather than targeting specific language skills. Our systematic mapping review reveals that 23 articles addressed general English learning; only a few focused on discrete skills: writing (n = 4), vocabulary (n = 2), and reading (n = 1).

RQ 2: What are evaluative dimensions frequently used for LAELL assessments?

Table 4. Distribution of studies by language skill focus.

Through a hybrid coding process, fourteen dimensions of learner autonomy were identified; they are presented in Figure 4 and detailed in the Appendix. It is important to note that these dimensions represent specific constructs derived from the assessment instruments reviewed, and they operate within a broader conceptual landscape. For instance, motivation and self-efficacy fall within the psychological domain, while the cognitive dimension belongs to the technical domain. In terms of frequency, the metacognitive dimension was the most frequently represented, followed by motivation and the social dimension. In contrast, dimensions such as technology, willingness, affective factors, critical thinking, political factors, and self-efficacy were the least represented in the reviewed assessments.

Figure 4. Prevalence of dimensions in LAELL assessments (frequencies: metacognitive 27, motivation 23, social responsibility 19, cognitive 13, attitude 10, confidence 6, belief 5, technology 4, willingness 3, affective 3, critical thinking 1, political 1, self-efficacy 1).

Furthermore, a temporal analysis indicates that the emphasis on specific dimensions has shifted in response to evolving learner demographics. While digital natives span four generational stages (1996–2006, 2007–2011, 2012–2017, and 2018–present) (Evans and Robertson, 2020), LAELL assessments primarily focused on the latter two (2012–2017 and 2018–2025). These two stages differ in their evaluative dimensions. As demonstrated in Figure 5, the third stage (2012–2017) emphasized the metacognitive (n = 5), motivation (n = 4), and responsibility (n = 4) dimensions, while the fourth stage (2018–2025) emphasized the metacognitive (n = 22), motivation (n = 19), cognitive (n = 8), and social (n = 17) dimensions. This aligns with Uslu and Durak’s (2022) finding that the metacognitive and motivation dimensions are strong predictors of learner autonomy. In addition, the increased attention to the social dimension reflects the importance of assessing peer and teacher reliance. Other psychological dimensions, such as confidence, belief, attitude, willingness, and affect, were also incorporated into LAELL assessments. Notably, competences essential for informal digital learning, particularly critical thinking and technology, were the least represented.

Figure 5. Comparison of evaluative dimensions between the two generational stages (2012–2017 vs. 2018–2025).

In addition, the reviewed assessments indicate that these dimensions serve diverse evaluative purposes. As outlined in the Appendix, these assessments were used to measure LAELL levels (n = 22), learners’ perceptions of LAELL (n = 3), readiness for LAELL (n = 3), psychological dimension of LAELL (n = 1), and meta-cognitive dimension of LAELL (n = 1).

RQ 3: What gaps are identified from the existing LAELL assessments?

Existing research reveals five key gaps in LAELL assessment: (1) region and demography, (2) learning environments, (3) temporal factors (timeliness of instruments), (4) language skills, and (5) methodological shortcomings.

First, the overrepresentation of Asian higher education contexts (90% of participants) limits cross-cultural generalizability. While adult learners often exhibit stronger metacognitive skills and higher motivation, this narrow focus in the literature we analyzed overlooks valuable insights from younger learners at primary and secondary levels. Future research should expand the demographic scope to provide a more comprehensive understanding of LAELL across cultural contexts and educational levels.

Second, existing LAELL assessments largely overlook informal digital learning settings. Research confirms that LAELL is dynamic and multidimensional (Benson, 2001; Nunan, 1997), with recent research on LAELL gradually transforming focus from formal to informal learning contexts (Rezai and Goodarzi, 2025). However, studies predominantly examine formal settings (n = 19, including 3 for blended learning), with only one for informal learning contexts. This is a striking imbalance given the growing prevalence of informal digital learning environments.

Third, LAELL assessments are largely outdated: most instruments relied on questionnaires developed 10–20 years ago. For example, even a recent study by Rezai and Goodarzi (2025) that adapted a questionnaire to assess LAELL through IDLE drew all of its dimensions from Barnard et al. (2009). Across the studies reviewed, the dimensions most often adapted or adopted from prior work are metacognition and motivation. In contrast, emerging dimensions such as critical thinking and technology competence (Tan et al., 2023) are underrepresented. Our review found few studies that recognized the need for updated dimensions, the exception being Ruelens (2019), who expanded the metacognitive dimension to include “transferring skills across contexts.” Furthermore, existing assessments also overlook the technology dimensions crucial to IDLE. Since the first “digital native” cohort emerged in 1996, technology has profoundly influenced how individuals learn foreign languages (Evans and Robertson, 2020; Prensky, 2001a). This oversight of technological dimensions limits the reliability of LAELL assessments in technology-driven contexts (Chen and Liu, 2024). These findings suggest that LAELL assessment instruments should either be newly developed or adapted from existing instruments with contemporary dimensions.

Fourth, LAELL assessments mostly focused on general English (77% of the studies), with limited attention to specific skills such as vocabulary, reading, and writing. However, English language learning generally encompasses listening, speaking, reading, and writing. Among these, speaking and writing (output skills) are particularly challenging due to their active production demands (Shen et al., 2020; Aziz and Kashinathan, 2021). Future research should assess current LAELL levels in these output skills and explore strategies for improvement.

Last but not least, quantitative tools such as questionnaires dominate LAELL assessments (see the Appendix), but they often lack the depth required to fully understand learners’ subjective experiences. Triangulation with qualitative methods such as interviews, focus groups, or case studies would allow a deeper exploration of those experiences, as well as the contextual factors influencing learners’ autonomy (Carter, 2014). These methods could provide richer insights into the affective and cognitive dimensions of LAELL that are often difficult to capture through quantitative surveys.

Discussion

This systematic review highlights a critical misalignment between contemporary language learning realities and the dimensions measured by prevailing LAELL assessments. A primary issue is the over-reliance on outdated frameworks. Approximately 70% of the tools we analyzed are adapted from pre-2015 models designed for formal classrooms. Consequently, they prioritize dimensions like metacognition and teacher support, which are rooted in formal education contexts (Little, 1996), while largely overlooking the critical role of technology literacy in informal learning (Benson, 2005). As a result, these assessments fail to account for autonomous learning in informal digital environments. This discrepancy stems from a longstanding research bias toward formal settings. Despite robust evidence affirming the prevalence and efficacy of IDLE (Sockett, 2023; Zhang and Liu, 2024), it is frequently treated as an experimental variable in research rather than a well-established mode of EFL acquisition (He and Zhu, 2017). This has led to a notable scarcity of assessments designed for IDLE contexts.

The dimensional focus of existing LAELL assessments reveals a further concern. While established psychological factors such as motivation and self-regulation remain central (Banat, 2022; Benson, 2011; Deci and Ryan, 2000), emerging competencies essential for learning “in the digital wild” (Han and Reinhardt, 2022) are underrepresented. For example, technology literacy (n = 3) and critical thinking (n = 1) are rarely assessed, even though the former is a prerequisite for IDLE (Reinders et al., 2022) and the latter involves “reflectively making sound judgments” (Facione, 2000).

A further imbalance is evident in the linguistic scope of LAELL research. Most assessments (77%) target general English proficiency, with limited attention to discrete language skills such as listening, speaking, reading, or writing (Shen et al., 2020). This is problematic given that productive skills such as speaking and writing are especially challenging because of their generative nature (Aziz and Kashinathan, 2021; Shen et al., 2020). Future research should therefore prioritize developing assessments of learner autonomy within these productive domains.

The review also indicates an evolution in the purpose of LAELL assessments. Rather than merely measuring autonomous behaviors, contemporary tools increasingly aim to diagnose learners’ psychological states, including their perceptions, readiness, and beliefs about autonomous learning (Chan, 2001; Hoa et al., 2019). This trend reflects a growing consensus that autonomy is not merely a behavioral output but is fundamentally mediated by cognitive and affective factors (Feng and Yang, 2025). By probing these dimensions, assessments can reveal not only whether learners are autonomous, but also how and why they engage with EFL learning across diverse environments.

While the hybrid deductive-inductive approach allowed us to map LAELL comprehensively, the resulting fourteen dimensions exhibit varying levels of conceptual clarity and overlap. For instance, affective factors and willingness show substantial theoretical kinship with core psychological constructs such as motivation, suggesting potential for regrouping under broader, more parsimonious categories (e.g., “affective dimensions”). Future research and assessment development would benefit from a more streamlined and theoretically unified framework that clearly distinguishes between enduring learner traits (e.g., self-efficacy), teachable skills (e.g., metacognitive strategies, critical thinking), and contextual enablers (e.g., access to technology, social support). Such refinement would improve the precision, comparability, and practical utility of LAELL assessments.

Finally, the geographical concentration of the reviewed studies in Asia represents both a strength and a limitation. While it provides region-specific insights, it also introduces a regional and cultural bias. The predominance of research from historically teacher-centered educational contexts may overemphasize metacognition, which relates to gaining independence from instructor-led structures, and may overlook understandings of autonomy prevalent in learner-centered educational cultures.

Conclusion

This systematic review synthesized 30 studies to map the dimensions, trends, and gaps in LAELL assessments. The findings reveal three critical issues: (1) a heavy reliance on assessment tools adapted from outdated frameworks, (2) a predominant focus on formal learning contexts, and (3) limited attention to dimensions essential for digital learning, such as digital competence and critical thinking. To align assessment with the reality of contemporary EFL learning, we propose four key recommendations for research and practice.

Our first recommendation is to modernize LAELL assessment tools. Given that approximately 70% of instruments are adapted from pre-2015 models, we recommend updating existing LAELL assessments or developing new ones. This is critical because current instruments were designed primarily for pre-digital learners, whereas today’s learners rely heavily on digital technology for informal language acquisition. Adapting traditional dimensions for digital environments aligns with Ruelens (2019), who demonstrates “skill transfer across contexts.”

The second recommendation is to develop skill-specific LAELL scales. Our findings showed that 77% of the reviewed assessments targeted general English proficiency, with productive skills underrepresented (Banat, 2022; Shen et al., 2020). Skill-specific scales would address this imbalance and provide more nuanced insights into how autonomy manifests in different linguistic domains.

Third, responding to the underrepresentation of digital competence and critical thinking in LAELL assessments, we recommend that educators explicitly integrate these skills into EFL curricula, for example through modules on learning how to be a critical digital learner or on strategies for IDLE. Researchers could further investigate the correlation between specific dimensions and IDLE performance using reflective journals or interviews.

Finally, the methodological patterns observed in this review indicate a predominance of quantitative surveys (see Figure 2). To gain a deeper understanding of LAELL, future studies should prioritize mixed-methods and longitudinal designs that combine quantitative scales or tests with qualitative methods such as interviews, reflective journals, and learning logs. Such qualitative approaches can reveal learners’ subjective experiences, track their autonomy development over time, and document informal learning practices.

The study has two primary limitations. First, we excluded terms such as “learner agency” or “self-directed learning” to maintain conceptual precision, as these represent distinct theoretical frameworks despite frequent conflation with learner autonomy. This exclusion may explain the pronounced Asian focus in our findings, since European researchers frequently use these alternative terms. Second, database coverage was limited to English-language publications indexed in Scopus, WOS, ScienceDirect, ERIC, and Google Scholar, omitting relevant non-English studies. For instance, although learner autonomy is emphasized in China, studies in Chinese databases such as CNKI were excluded (Farivar and Rahimi, 2015). Future research should expand the search terms and include multilingual databases to ensure more comprehensive coverage of LAELL assessments.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

AL: Conceptualization, Formal analysis, Methodology, Writing – original draft. AA: Supervision, Validation, Investigation, Writing – review & editing. HD: Resources, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Acknowledgments

The author would like to thank the reviewers for their constructive and insightful comments and suggestions.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2026.1711599/full#supplementary-material

References

Almusharraf, N. (2018). English as a foreign language learner autonomy in vocabulary development: variation in student autonomy levels and teacher support. J. Res. Innov. Teach. Learn. 11, 159–177. doi: 10.1108/JRIT-09-2018-0022

Alzubi, A. A. F., Singh, M. K. M., and Pandian, A. (2017). The use of learner autonomy in English as a foreign language context among Saudi undergraduates enrolled in preparatory year deanship at Najran University. Adv. Lang. Lit. Stud. 8, 152–160. doi: 10.7575/aiac.alls.v.8n.2p.152

Aziz, A. A., and Kashinathan, S. (2021). ESL learners’ challenges in speaking English in Malaysian classroom. Development 10, 983–991. doi: 10.6007/IJARPED/v10-i2/10355

Baharom, N., and Shaari, A. H. (2022). Portfolio based assessment and learner autonomy practice among ESL students. J. Lang. Linguist. Stud. 18, 1289–1305.

Banat, M. (2022). The exploratory practice: an approach for enhancing students’ learning process awareness. Int. J. Res. Educ. Sci. 8, 120–134. doi: 10.46328/ijres.2586

Barnard, L., Lan, W. Y., To, Y. M., Paton, V. O., and Lai, S. L. (2009). Measuring self-regulation in online and blended learning environments. Internet High. Educ. 12, 1–6. doi: 10.1016/j.iheduc.2008.10.005

Basri, F. (2023). Factors influencing learner autonomy and autonomy support in a faculty of education. Teach. High. Educ. 28, 270–285. doi: 10.1080/13562517.2020.1798921

Bawden, D., and Robinson, L. (2020). Information overload. Oxford encyclopedia of political decision making. Oxford: Oxford University Press.

Beel, J., and Gipp, B. (2009). Google Scholar’s ranking algorithm: an introductory overview. In Proceedings of the 12th international conference on scientometrics and informetrics (ISSI’09), 230–241.

Benson, P. (1997). “The philosophy and politics of learner autonomy” in Autonomy and independence in language learning. eds. P. Benson and P. Voller (London: Longman), 18–34.

Benson, P. (2005). Autonomy and information technology in the educational discourse of the information age. Information technology and innovation in language education, 173.

Benson, P. (2011). Teaching and researching autonomy in language learning. (2nd ed.). Pearson Education.

Benson, P. (2001). Teaching and researching autonomy in language learning. Harlow: Longman.

Boonma, N., and Swatevacharkul, R. (2020). The effect of autonomous learning process on learner autonomy of English public speaking students. Indonesian J. Appl. Ling. 10, 194–205. doi: 10.17509/ijal.v10i1.25037

Borges, L. (2022). A complex dynamic model of autonomy development. Stu. Self-Access Learning J. 13, 200–223. doi: 10.37237/130203

Campbell, C., and Olteanu, A. (2024). The challenge of Postdigital literacy: extending multimodality and social semiotics for a new age. Postdigit Sci Educ 6, 572–594. doi: 10.1007/s42438-023-00414-8

Cao, D. T. P., and Pho, D. P. (2024). Learner autonomy in language learning: the development of a rigorous measuring scale. VNUHCM J. Soc. Sci. Hum. 8, 2641–2651.

Carter, N. (2014). The use of triangulation in qualitative research. J. Sci. 41, 545–547.

Ceylan, N. O. (2015). Fostering learner autonomy. Procedia Soc. Behav. Sci. 199, 85–93. doi: 10.1016/j.sbspro.2015.07.491

Chan, V. (2001). Readiness for learner autonomy: what do our learners tell us? Teach. High. Educ. 6, 505–518. doi: 10.1080/13562510120078045

Chen, K. T. C. (2021). The effects of technology-mediated TBLT on enhancing the speaking abilities of university students in a collaborative EFL learning environment. Appl. Ling. Review 12, 331–352. doi: 10.1515/applirev-2018-0126

Chen, J. (2022). Effectiveness of blended learning to develop learner autonomy in a Chinese university translation course. Educ. Inf. Technol. 27, 12337–12361. doi: 10.1007/s10639-022-11125-1

Chen, L. M., and Liu, C. (2024). Critical learner autonomy in the digital language learning context. TESOL J. 16:e906. doi: 10.1002/tesj.906

Chong, S. W., and Reinders, H. (2022). Autonomy of English language learners: a scoping review of research and practice. Lang. Teach. Res. 29, 607–632. doi: 10.1177/13621688221075812

CNNIC (2023). The 51st statistical report on the development of internet in China. Beijing: China Internet Network Information Center.

Dafei, D. (2007). An exploration of the relationship between learner autonomy and English proficiency. Asian EFL J. 24, 24–34.

Dai, C. P., and Ke, F. (2022). Educational applications of artificial intelligence in simulation-based learning: a systematic mapping review. Comput. Educ. Artif. Int. 3:100087. doi: 10.1016/j.caeai.2022.100087

Dang, T. T. (2012). Learner autonomy: a synthesis of theory and practice. Internet J. Lang. Cult. Soc. 35, 52–67.

Deci, E. L., and Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychol. Inq. 11, 227–268. doi: 10.1207/S15327965PLI1104_01

Dixon, D. (2011). Measuring language learner autonomy in tertiary-level learners of English (doctoral dissertation). Warwick: University of Warwick.

Enayati, F., and Gilakjani, A. P. (2020). The impact of computer assisted language learning (CALL) on improving intermediate EFL learners’ vocabulary learning. Int. J. Lang. Educ. 4, 96–112. doi: 10.26858/ijole.v4i2.10560

Evans, C., and Robertson, W. (2020). The four phases of the digital natives debate. Hum. Behav. Emerging Technol. 2, 269–277. doi: 10.1002/hbe2.196

Facione, P. A. (2000). The disposition toward critical thinking: its character, measurement, and relationship to critical thinking skill. Informal Logic 20:2254. doi: 10.22329/il.v20i1.2254

Feng, D., and Yang, C. (2025). Exploring the associations between informal digital English learning (IDLE), personality traits, and critical thinking among Chinese EFL learners. Int. J. Appl. Linguist. 12:12835. doi: 10.1111/ijal.12835

Farivar, A., and Rahimi, A. (2015). The impact of CALL on Iranian EFL learners’ autonomy. Procedia Soc. Behav. Sci. 192, 644–649. doi: 10.1016/j.sbspro.2015.06.112

Fotiadou, A., Angelaki, C., and Mavroidis, I. (2017). Learner autonomy as a factor of the learning process in distance education. Eur. J. Open Distance E-learning 20, 95–110.

Gallardo-Echenique, E. E., Marqués-Molías, L., Bullen, M., and Strijbos, J. W. (2015). Let’s talk about digital learners in the digital era. Int. Rev. Res. Open Distrib. Learn. 16, 156–187. doi: 10.19173/irrodl.v16i3.2196

Gamble, C., Wilkins, M., Aliponga, J., Koshiyama, Y., Yoshida, K., and Ando, S. (2018). Learner autonomy dimensions: what motivated and unmotivated EFL students think. Lingua Posnan. 60, 33–47. doi: 10.2478/linpo-2018-0003

Ghobain, E. A., and Zughaibi, A. A. (2021). Examining Saudi EFL university students' readiness for online learning at the onset of the Covid-19 pandemic. AWEJ 7, 3–21. doi: 10.24093/awej/call7.1

Gholami, H. (2016). Self assessment and learner autonomy. Theory Pract. Lang. Stud. 6:46. doi: 10.17507/tpls.0601.06

Grames, E. M., Stillman, A. N., Tingley, M. W., and Elphick, C. S. (2019). An automated approach to identifying search terms for systematic reviews using keyword co-occurrence networks. Methods Ecol. Evol. 10, 1645–1654. doi: 10.1111/2041-210X.13268

Haddaway, N. R., Collins, A. M., Coughlin, D., and Kirk, S. (2015). The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS One 10:e0138237. doi: 10.1371/journal.pone.0138237

Han, Y., and Reinhardt, J. (2022). Autonomy in the digital wilds: agency, competence, and self-efficacy in the development of L2 digital identities. TESOL Q. 56, 985–1015. doi: 10.1002/tesq.3142

He, T., and Zhu, C. (2017). Digital informal learning among Chinese university students: the effects of digital competence and personal factors. Int. J. Educ. Technol. High. Educ. 14:44. doi: 10.1186/s41239-017-0082-x

Hoa, T. M., Thuy, N. T. T., and Tran, L. T. H. (2019). The English-majored sophomores' self-perception of autonomous language learning. Engl. Lang. Teach. 12, 119–131. doi: 10.5539/elt.v12n12p119

Holec, H. (1981). Autonomy and foreign language learning. Oxford: Pergamon (first published 1979, Strasbourg: Council of Europe).

Irgatoğlu, A., Sarıçoban, A., Özcan, M., and Dağbaşı, G. (2022). Learner autonomy and learning strategy use before and during the COVID-19 pandemic. Sustainability 14:6118. doi: 10.3390/su14106118

Kartal, G., and Balçikanli, C. (2019). Tracking the culture of learning and readiness for learner autonomy in a Turkish context. TEFLIN J. 30:22. doi: 10.15639/teflinjournal.v30i1/22-46

Knox, J. (2019). What does the ‘postdigital’ mean for education? Three critical perspectives on the digital, with implications for educational research and practice. Postdigit. Sci. Educ. 1, 357–370. doi: 10.1007/s42438-019-00045-y

Lai, C. (2019). Technology and learner autonomy: an argument in favor of the nexus of formal and informal language learning. Annu. Rev. Appl. Linguist. 39, 52–58. doi: 10.1017/S0267190519000035

Lamb, T., and Little, S. (2016). Assessment for autonomy, assessment for learning, and learner motivation: fostering learner identities. Classroom-based assessment in L2 contexts. London: CRC Press, 184–206.

Larsari, V. N., and Oghli, H. S. (2016). On the effect of self-assessment and peer-assessment on Iranian EFL learners’ learner autonomy. J. Lang. Ling. 2, 26–31.

Leong, W. S., Ismail, H., Costa, J. S., and Tan, H. B. (2018). Assessment for learning research in east Asian countries. Stud. Educ. Eval. 59, 270–277. doi: 10.1016/j.stueduc.2018.09.005

Little, D. (1996). Learner autonomy: some steps in the evolution of theory and practice. TEANGA 16, 1–13.

Macaskill, A., and Taylor, E. (2010). The development of a brief measure of learner autonomy in university students. Stud. High. Educ. 35, 351–359. doi: 10.1080/03075070903502703

McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochem. Med. 22, 276–282.

Mohammadi Zenouzagh, Z., Admiraal, W., and Saab, N. (2023). Learner autonomy, learner engagement and learner satisfaction in text-based and multimodal computer mediated writing environments. Educ. Inf. Technol. 28, 1–41. doi: 10.1007/s10639-023-11615-w

Morrison, A., Polisena, J., Husereau, D., Moulton, K., Clark, M., Fiander, M., et al. (2012). The effect of English-language restriction on systematic review-based meta-analyses: a systematic review of empirical studies. Int. J. Technol. Assessment Health Care 28, 138–144. doi: 10.1017/S0266462312000086

Nunan, D. (1997). “Designing and adapting materials to encourage learner autonomy” in Autonomy and Independence in language learning. eds. P. Benson and P. Voller (London: Longman).

Orakci, S., and Gelisli, Y. (2017). Learner autonomy scale: a scale development study. Malaysian Online J. Educ. Sci. 5, 25–35.

Oussou, S., Kerouad, S., and Hdii, S. (2024). Learner autonomy: Moroccan EFL university students’ beliefs and readiness. Stu. Eng. Lang. Educ. 11, 116–132. doi: 10.24815/siele.v11i1.30007

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372:n71. doi: 10.1136/bmj.n71

Pasaribu, T. A. (2020). Challenging EFL students to read: digital reader response tasks to foster learner autonomy. Teaching Eng. Technol 20, 21–41.

Pérez, J., Díaz, J., Garcia-Martin, J., and Tabuenca, B. (2020). Systematic literature reviews in software engineering—enhancement of the study selection process using Cohen’s kappa statistic. J. Syst. Softw. 168:110657. doi: 10.1016/j.jss.2020.110657

Phan, T. T. H., and Huynh, S. T. (2025). Exploring English majors’ perceived autonomy in English language learning: an analysis of demographic differences. Educ. Proc. Int. J. 14:e2025081. doi: 10.22521/edupij.2025.14.81

Prensky, M. (2001a). Digital natives, digital immigrants part 1. On Horiz. 9, 1–6. doi: 10.1108/10748120110424816

Rasulova, M., and Ottoson, K. (2022). The impact of learner agency and self-regulated learning in EFL classes. Int. J. Soc. Sci. Hum. Res. 712, 712–717. doi: 10.47191/ijsshr/v5-i2-44

Reinders, H., Lai, C., and Sundqvist, P. (2022). The Routledge handbook of language learning and teaching beyond the classroom. London: Routledge.

Rezai, A., and Goodarzi, A. (2025). Exploring the nexus of informal digital learning of English and online self-regulated learning in EFL university contexts: longitudinal insights. Comput. Hum. Behav. Rep. 18:100666. doi: 10.1016/j.chbr.2025.100666

Ruelens, E. (2019). Measuring language learner autonomy in higher education: the self-efficacy questionnaire of language learning strategies. Lang. Learn. High. Educ. 9, 371–393. doi: 10.1515/cercles-2019-0020

Sato, T., Murase, F., and Burden, T. (2020). An empirical study on vocabulary recall and learner autonomy through mobile-assisted language learning in blended learning settings. CALICO J. 37, 254–276. doi: 10.1558/cj.40436

Shen, B., Bai, B., and Xue, W. (2020). The effects of peer assessment on learner autonomy: an empirical study in a Chinese college English writing class. Stud. Educ. Eval. 64:100821. doi: 10.1016/j.stueduc.2019.100821

Sockett, G. (2023). Input in the digital wild: online informal and non-formal learning and their interactions with study abroad. Second. Lang. Res. 39, 115–132. doi: 10.1177/02676583221122384

Sockett, G., and Toffoli, D. (2012). Beyond learner autonomy: a dynamic systems view of the informal learning of English in virtual online communities. ReCALL 24, 138–151. doi: 10.1017/S0958344012000031

Stringer, T. (2024). A conceptual framework for emergent language learner autonomy–a complexity perspective for action research. Innov. Lang. Learn. Teach. 19, 452–464. doi: 10.1080/17501229.2024.2371505

Sundqvist, P., and Sylvén, L. K. (2016). Extramural English in teaching and learning: from theory and research to practice. Springer.

Suwannaphim, S., and Vibulphol, J. (2023). Fostering learner autonomy of Thai lower secondary school students in project-based English instruction. LEARN J. 16, 259–272.

Swatevacharkul, R., and Boonma, N. (2020). Learner autonomy: attitudes of graduate students in English language teaching program in Thailand. LEARN J. 13, 176–193.

Swatevacharkul, R., and Boonma, N. (2021). Learner autonomy assessment of English language teaching students in an international program in Thailand. Indones. J. Appl. Linguist. 10, 749–759. doi: 10.17509/ijal.v10i3.31764

Tajmirriahi, T., and Rezvani, E. (2021). Learner autonomy in L2 writing: the role of academic self-concept and academic achievement. Educ. Res. Int. 2021:6074039. doi: 10.1155/2021/6074039

Tan, A. J., Davies, J. L., Nicolson, R. I., and Karaminis, T. (2023). Learning critical thinking skills online: can precision teaching help? Educ. Technol. Res. Dev. 71, 1–22. doi: 10.1007/s11423-023-10227-y

Tassinari, M. G. (2012). Evaluating learner autonomy: a dynamic model with descriptors. Stud. Self-Access Learn. J. 3, 24–40. doi: 10.37237/030103

Toffoli, D., and Sockett, G. (2015). University teachers’ perceptions of online informal learning of English (OILE). Comput. Assist. Lang. Learn. 28, 7–21. doi: 10.1080/09588221.2013.776970

Tuan, D. M. (2021). Learner autonomy in English language learning: Vietnamese EFL students’ perceptions and practices. Indones. J. Appl. Linguist. 11, 307–317. doi: 10.17509/ijal.v11i2.29605

Ünal, S., Çeliköz, N., and Sari, I. (2017). EFL proficiency in language learning and learner autonomy perceptions of Turkish learners. J. Educ. Pract. 8, 117–122.

Uslu, N. A., and Durak, H. Y. (2022). Predicting learner autonomy in collaborative learning: the role of group metacognition and motivational regulation strategies. Learn. Motiv. 78:101804. doi: 10.1016/j.lmot.2022.101804

Van Nguyen, S., and Habók, A. (2021). Designing and validating the learner autonomy perception questionnaire. Heliyon 7:e06831. doi: 10.1016/j.heliyon.2021.e06831

Warschauer, M., Turbee, L., and Roberts, B. (1996). Computer learning networks and student empowerment. System 24, 1–14. doi: 10.1016/0346-251X(95)00049-P

Zhang, Y., and Liu, G. (2024). Revisiting informal digital learning of English (IDLE): a structural equation modeling approach in a university EFL context. Comput. Assist. Lang. Learn. 37, 1904–1936. doi: 10.1080/09588221.2022.2134424

Keywords: assessment, dimensions, LAELL, PRISMA, purposes

Citation: Liu A, Abdullah A and Dong H (2026) Dimensions of English language learner autonomy assessment: a systematic review of what is there and what is missing. Front. Psychol. 17:1711599. doi: 10.3389/fpsyg.2026.1711599

Received: 23 September 2025; Revised: 12 January 2026; Accepted: 13 January 2026;
Published: 11 February 2026.

Edited by:

David Pérez-Jorge, University of La Laguna, Spain

Reviewed by:

Miriam Catalina González-Afonso, University of La Laguna, Spain
Zeus Plasencia-Carballo, University of La Laguna, Spain

Copyright © 2026 Liu, Abdullah and Dong. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Aiju Liu, liuaiju@student.usm.my; Amelia Abdullah, amelia@usm.my
