AUTHOR=Panagopoulou Fereniki, Parpoula Christina, Karpouzis Kostas
TITLE=Legal perspectives on AI and the right to digital literacy in education
JOURNAL=Frontiers in Computer Science
VOLUME=7
YEAR=2025
URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1692268
DOI=10.3389/fcomp.2025.1692268
ISSN=2624-9898
ABSTRACT=Introduction: Artificial intelligence (AI) is increasingly becoming part of educational practice, providing opportunities for personalization and access but also introducing risks to equity, learner autonomy, privacy, and accountability. Focusing on the European and Greek contexts, we examine whether a right to digital literacy can be grounded in existing law and how the EU AI Act reshapes the duties of educational actors.
Methods: We conduct a legal-doctrinal and policy-analytic study of EU primary and secondary law, the Greek Constitution, and EU regulations (the AI Act), read alongside institutional 'grey' literature (e.g., educator toolkits, national bioethics opinions). We map AI Act recitals and Annex III to concrete governance obligations (fundamental rights impact assessment, transparency, human oversight) and test their implications through targeted case vignettes (examinations, admissions, LMS/explainability). The scope is limited to the EU and Greece; comparative case law is used selectively to illuminate the normative claims.
Results: First, a defensible right to digital literacy emerges from EU instruments and Greek constitutional provisions on participation in the information society and the mission of education. Second, many AI uses in education (e.g., admissions, outcome evaluation, proctoring) qualify as high-risk under the AI Act, triggering ex-ante and ongoing duties, while emotion recognition in education (absent medical or safety grounds) is effectively off-limits. Third, the vignette analysis shows recurring pressure points, such as bias and disparate impact, opacity in automated decisions, and excessive surveillance, where explainability and meaningful human oversight are necessary to preserve equity, autonomy, and educational quality.
Discussion: We propose an actionable governance agenda for schools and universities: mandatory fundamental rights impact assessments adapted to educational contexts; explainability criteria as admissibility and accountability thresholds for deployed systems; clear escalation paths and complaint mechanisms; inclusive access measures to narrow the digital divide; and multi-stakeholder oversight that keeps educators central rather than substitutable. Taken together, these measures shift the debate from abstract ethics to enforceable legal duties, aligning AI adoption with human-rights values and the educational mission to cultivate critical, responsible citizens.