Large Language Models (LLMs) have rapidly transformed the possibilities for educational technology. Their ability to generate, summarize, and interpret text at scale enables innovative support for teaching, learning, and assessment. LLMs can enhance personalized learning, provide feedback, and assist educators in managing diverse classrooms. One emerging direction is agentic AI, in which LLM-based systems act autonomously to guide learning, suggest interventions, or support classroom decision-making. While these applications are promising, they also raise challenges, including bias, inequity, and ethical concerns. This article collection explores how LLMs can be leveraged to improve educational outcomes, raise teaching standards, and promote inclusive and equitable learning environments, with agentic AI representing one pathway through which LLM capabilities can be operationalized in educational contexts.
This Research Topic aims to explore how LLMs can be harnessed to create educational systems that are both effective and equitable. While LLMs are increasingly used for content generation, personalized tutoring, and automated assessment, their potential to support inclusive and fair education remains largely underexplored. We invite contributions investigating how LLMs can enhance teaching and learning, provide high-quality and unbiased feedback, support diverse learners, reduce bias, and increase accessibility in educational materials. Applications may include agentic AI systems, in which LLMs act autonomously to guide students or assist educators. The goal is to provide insights, frameworks, and empirical evidence on how LLMs can be responsibly integrated into educational settings to improve learner outcomes, foster engagement, and ensure equity. Contributions should combine technical innovation with pedagogical and psychological considerations and ethical reflection.
We welcome original research or review articles that explore applications of LLMs in education with a focus on equity, fairness, and quality. Topics of interest include, but are not limited to:
- Personalized learning and adaptive tutoring powered by LLMs
- Automated and personalized feedback, assessment, and fair grading practices
- Tools supporting teachers in lesson planning, content creation, or classroom management
- Applications of agentic AI where LLMs are used to autonomously guide or support learning
- Detection and mitigation of bias in AI-generated educational materials
- Inclusive and accessible education for students with diverse abilities or backgrounds
- Ethical, legal, and policy considerations in deploying LLMs in education
Manuscripts should demonstrate rigorous research, principled experiments with users (e.g., learners, teachers, or administrators), practical implementations, or theoretical insights showing how LLMs can enhance teaching, learning, and equity.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Conceptual Analysis
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Review
Systematic Review
Technology and Code
Keywords: Large Language Models, Agentic AI, AI in Education, Fairness and Accountability, Digital Accessibility
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.