Everyone agrees children need AI literacy. The problem is nobody agrees on what that means. And a new Royal Society report by Hillman, Holmes, and Duarte (2025), which reviewed 115 articles, analyzed 20 frameworks, and examined 8 of those in depth, lays out just how deep that confusion runs. Their conclusion is clear: most AI literacy frameworks built for schools are actually built for computing students, and they’re failing everyone else.
I’ve been making the case for years that pedagogy determines whether AI helps or hurts. This report makes me sharpen that claim. Because if the frameworks guiding AI literacy in schools are lopsided, if they teach kids how algorithms work but skip how AI reshapes democracy, labour, and their own cognitive habits, then the pedagogy built on top of those frameworks will be lopsided too.
The Three Dimensions Most Frameworks Ignore
Hillman et al. organize AI literacy into three interconnected dimensions: technological (how AI works), practical (how to use AI responsibly), and human (how AI affects people, society, and the planet). The technological dimension covers things like data, algorithms, and machine learning pipelines. The practical dimension includes prompt writing, evaluating AI outputs, and knowing when not to use AI at all. The human dimension covers ethics, rights, inclusion, democracy, environmental costs, and power asymmetries.
The heatmap the authors built across all 20 frameworks tells the story clearly. Technological coverage is strong across the board, the practical dimension gets mixed attention, and the human dimension barely registers. As Hillman et al. put it, “most frameworks remain computer science-centric, and few address democracy, governance, labour markets, environmental costs, or AI’s psychological impacts” (p. 28).

And when frameworks do name ethics, they tend to treat it as a checkbox. They’ll list “bias” or “fairness” as a competency heading with no classroom examples, no assessable outcomes, no guidance for teachers on how to actually teach it. A heading is not a curriculum. I covered a similar problem when reviewing Chee, Ahn, and Lee’s (2025) AI literacy competency framework, which at least attempted progression levels but still struggled to operationalize the ethical dimensions in ways teachers could use on Monday morning.
A Narrow Intellectual Foundation
One of the sharpest observations in this report is that most AI literacy frameworks trace back to just three foundational works: Touretzky et al. (2019), which became AI4K12; Long and Magerko (2020); and Ng et al. (2023). All three come from US computer science and human-computer interaction research. Hillman et al. point out that this creates a narrow base that leaves out the Global South, SEND (special educational needs and disabilities) and inclusion research, civic and media literacy traditions, and ecological concerns.
The risk here is real. If policymakers synthesize these frameworks and mistake the result for consensus, they’re building on an echo chamber. The authors are careful to name this: “without taking a critical and human-centric stance, UK policy risks importing academic definitions that are under-theorised for schools, rather than designing its own grounded framework” (p. 40). That warning applies well beyond the UK. Any education system leaning on existing AI literacy frameworks without questioning their origins is vulnerable to the same problem.
I wrote about something parallel when covering Roe, Furze, and Perkins’s (2025) digital plastic metaphor for critical AI literacy. Their argument is that AI literacy without critical literacy is incomplete. This Royal Society report arrives at a similar conclusion from a policy angle: frameworks that teach children to use AI confidently without teaching them to question it are doing half the job.
Teachers as an Afterthought
The report finds that teacher professional development is treated as an afterthought in nearly every framework and case study reviewed. Most continuing professional development (CPD) is informal, one-off, and self-directed, which leaves teachers unprepared and underconfident. Hillman et al. argue for a teacher-first strategy: co-design curricula with educators, embed AI literacy in initial teacher education, and make the training realistic about workload pressures.
This resonates with what Villar Onrubia et al. (2025) found in their EU-wide survey: students are already using AI, and their teachers are still catching up. The adoption gap is real, and frameworks that assume teachers will simply deliver new content with minimal preparation are setting everyone up for failure.
Hillman et al. also flag a subtler issue. Corporate-funded AI literacy resources often encourage teachers to adopt AI tools for efficiency gains, such as lesson planning and grading, with little space for critical interrogation of the technology itself. The learning objective becomes “feeling confident to use AI” rather than understanding when AI shouldn’t be used, who benefits from its adoption, or what data it’s collecting. That’s not literacy. That’s product training.
The Inclusion Gap No One Wants to Talk About
Inclusion, SEND, and equity are among the weakest areas across the entire body of work reviewed. Very few frameworks build in SEND-specific design from the outset, and broader equity and diversity considerations are almost entirely absent. The report highlights Song et al.’s Universal Design for Learning approach as a notable exception. I’d add that Linsenmayer’s (2025) OECD report on AI and special education, which I covered previously, reached similar conclusions about the need to centre accessibility from the design stage.
Hillman et al. recommend that UK pilots be deliberately targeted at under-resourced schools, with SEND adaptations built in as a design requirement. Too often, AI literacy initiatives start in well-resourced schools and spread outward, with accessibility bolted on later. The pattern repeats across ed-tech adoption cycles.
Assessment: The Missing Piece
Assessment is the weakest link in the field. Hillman et al. found almost no validated measures for AI literacy across all frameworks reviewed. Most rely on self-reported feedback or small qualitative studies. The EU/OECD AILit framework is slated to feed into the 2029 PISA assessments, but that’s still years away.
The authors recommend light-touch, subject-embedded assessment tools that go beyond measuring technical skill to gauge critical thinking and ethical reasoning. Current approaches focus narrowly on whether students can write an algorithm or use a chatbot; that may matter for computing students, but it says nothing about whether a child can identify bias in a recommendation system or articulate why data privacy matters.
I’ve argued across many posts on this blog that assessment design shapes what gets taught. If we only assess technical AI competencies, that’s what classrooms will prioritize, and the human dimension will keep getting squeezed out.
What This Means for AI Pedagogy Beyond the UK
The report was written for UK policymakers, and Hillman et al. find all four nations moving in different directions: England driven by techno-optimist forecasts, Scotland prioritizing ethical civic engagement, Wales linking AI to responsible citizenship, Northern Ireland still exploring.
But the problems are global. Frameworks worldwide lean too heavily on technological knowledge, teacher development comes last, and inclusion remains an afterthought when it appears at all. Assessment barely exists as a coherent practice, and corporate influence shapes what counts as literacy in ways most policymakers don’t question.
The authors call for a phased, pilot-first approach: start with 3-5 regional pilots in diverse school settings, embed evaluation from the beginning, co-design with teachers and students, and scale only what works. They warn that “implementing AI literacy alongside the fast adoption of AI in schools (without clear evidence of positive impact to learning or teaching) can only increase the risks of inequality and deepen divides in learning outcomes” (p. 39).
References
- Chee, H., Ahn, S., & Lee, J. (2025). A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. British Journal of Educational Technology, 56, 2146-2182. https://doi.org/10.1111/bjet.13556
- Hillman, V., Holmes, W., & Duarte, T. (2025). A rapid review of AI literacy frameworks, with policy recommendations. A report prepared for the Royal Society. London: The Royal Society. https://royalsociety.org/-/media/policy/projects/ai-in-education/hillman-et-al-a-rapid-review-of-ai-literacy-frameworks.pdf
- Roe, J., Furze, L., & Perkins, M. (2025). Digital plastic: A metaphorical framework for Critical AI Literacy in the multiliteracies era. Pedagogies: An International Journal. Advance online publication. https://doi.org/10.1080/1554480X.2025.2557491
- Villar Onrubia, D., Cachia, R., Rietz, C., Feltrero, R., Niemi, H., Hallissy, M., & Reuter, R. (2025). Generative artificial intelligence in secondary education: Uses and perceptions from the perspective of early adopters across five EU Member States. Publications Office of the European Union. https://publications.jrc.ec.europa.eu/repository/handle/JRC144345
