What is next in the AI journey in education?
From Technological Adoption to Human-Centred Pedagogical Transformation
Abstract
Artificial intelligence (AI) has moved rapidly from the periphery of educational technology to a central position in contemporary debates about teaching, learning, and assessment. Early stages of AI adoption in education focused predominantly on automation, efficiency, and personalised content delivery. However, these developments have also generated significant ethical, pedagogical, and equity-related concerns, particularly regarding academic integrity, learner agency, teacher professionalism, and inclusion. This essay argues that the next phase of the AI journey in education must move beyond a narrow focus on tools and efficiency toward a systemic, human-centred, and pedagogically grounded transformation. Drawing on learning theory, critical AI studies, inclusive education research, and emerging empirical evidence, the paper examines how AI is reshaping curriculum, assessment, teacher identity, learner wellbeing, and educational governance. Attention is given to neurodiversity, universal design for learning, and the need to reframe AI as a learning partner rather than a substitute for human cognition.
The essay concludes by proposing a shift from “AI-enhanced education” to “AI-informed pedagogy,” positioning human judgement, ethics, and relational teaching at the centre of future educational ecosystems.
1. Introduction
Artificial intelligence is no longer an emergent or speculative technology within education. Large language models, adaptive learning platforms, learning analytics, and generative AI tools are now embedded in classrooms, universities, and professional learning environments worldwide. While early educational technologies often supplemented existing pedagogical practices, AI presents a more disruptive challenge, not only automating routine tasks but also encroaching upon traditional human domains such as writing, problem-solving, feedback, and decision-making (Luckin et al., 2016; Selwyn, 2019).
Much of the current discourse surrounding AI in education is polarised. On one hand, proponents highlight unprecedented opportunities for personalisation, scalability, and learner support (Holmes et al., 2022). On the other hand, critics warn of deskilling teachers, exacerbating inequities, and undermining authentic learning (Biesta, 2022; Williamson & Eynon, 2020). These tensions suggest that the central question is no longer whether AI will be used in education, but rather how it will be integrated and whose values will shape its deployment.
This essay addresses the question: What is next in the AI journey in education? It argues that the next phase requires a fundamental reimagining of pedagogy, assessment, and educational purpose. Rather than viewing AI as a set of tools to be managed or controlled, education systems must develop ethically grounded, inclusive, and critically informed approaches that position AI as a partner in learning while preserving human agency and relational teaching.
2. From Automation to Augmentation: Reframing AI’s Educational Role
2.1 Early Stages of AI in Education
Initial uses of AI in education focused largely on automation and efficiency. Intelligent tutoring systems, automated grading, and adaptive content delivery were designed to replicate aspects of teacher instruction at scale (Anderson et al., 1995). These systems aligned closely with behaviourist and cognitivist models of learning, emphasising mastery, repetition, and performance optimisation.
While such approaches demonstrated measurable gains in specific domains, they also reinforced narrow conceptions of learning as content acquisition rather than meaning-making (Biesta, 2015). Furthermore, automation-driven models risked reducing learners to data points and teachers to system supervisors (Selwyn, 2019).
2.2 AI as Cognitive and Metacognitive Support
The next phase of AI integration increasingly emphasises augmentation rather than replacement. Generative AI tools, for example, can scaffold brainstorming, provide formative feedback, and model expert thinking processes (Mollick & Mollick, 2023). When used pedagogically, these tools can support metacognition, self-regulation, and reflective learning.
However, augmentation is not inherently beneficial. Without explicit pedagogical framing, AI risks becoming a cognitive crutch that undermines deep learning (Kirschner & De Bruyckere, 2017). The future of AI in education, therefore, depends on educators’ capacity to design learning experiences that make AI use visible, intentional, and critically examined.
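One concrete way to make AI use visible and intentional is to record it explicitly alongside the learning task. The sketch below is a minimal, hypothetical illustration of this idea (the class name and fields are assumptions, not an existing system): each AI interaction is logged with its pedagogical purpose, so that teachers and students can review how assistance was used rather than conceal it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageLog:
    """Transparent record of AI assistance during a learning task.

    Hypothetical sketch: a real system would attach this record to an
    LMS submission rather than keep it in memory.
    """
    entries: list = field(default_factory=list)

    def record(self, purpose: str, prompt: str, response: str) -> None:
        # Each entry captures *why* the AI was consulted, not just the
        # exchanged text, so AI use can be assessed rather than hidden.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "purpose": purpose,  # e.g. "brainstorming", "feedback"
            "prompt": prompt,
            "response": response,
        })

    def summary(self) -> dict:
        # Aggregate counts by purpose for quick review by teacher or student.
        counts: dict = {}
        for entry in self.entries:
            counts[entry["purpose"]] = counts.get(entry["purpose"], 0) + 1
        return counts

log = AIUsageLog()
log.record("brainstorming", "Suggest three angles on AI and assessment", "...")
log.record("feedback", "Critique my draft paragraph", "...")
print(log.summary())  # {'brainstorming': 1, 'feedback': 1}
```

The design choice worth noting is that the log records purpose, not merely content: a summary grouped by purpose is what lets an assessor distinguish brainstorming support from outsourced writing.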
3. Redefining the Role of the Teacher
3.1 From Knowledge Authority to Learning Architect
AI challenges traditional notions of teacher authority grounded in exclusive access to knowledge. Yet this does not diminish the role of teachers; rather, it reconfigures it. In AI-rich environments, teachers increasingly function as learning architects, ethical guides, and sense-makers (Fullan et al., 2020).
This shift aligns with constructivist and sociocultural theories of learning, which emphasise the importance of dialogue, scaffolding, and social interaction (Vygotsky, 1978). AI can support these processes, but it cannot replace the relational and contextual judgement that teachers bring to complex learning environments.
3.2 Professional Identity and Teacher Agency
A critical risk in the next phase of AI adoption is the erosion of teacher agency through algorithmic decision-making and platform-driven pedagogy (Williamson, 2017). If AI systems prescribe learning pathways, assessments, or interventions without teacher interpretation, professional judgement may be marginalised.
Sustainable AI integration, therefore, requires robust professional learning focused not only on technical skills but also on critical AI literacy, data ethics, and pedagogical decision-making (OECD, 2021). Teachers must remain active agents in shaping how AI is used, rather than passive implementers of externally designed systems.
4. Assessment in the Age of Artificial Intelligence
4.1 The Crisis of Traditional Assessment
Few areas of education have been more disrupted by AI than assessment. Generative AI has exposed the fragility of assessment models reliant on unsupervised written tasks and recall-based outcomes (Eaton, 2023). Attempts to “AI-proof” assessment through surveillance or detection tools have proven both ineffective and ethically problematic. This disruption, however, creates an opportunity to address long-standing critiques of assessment as reductive, inequitable, and misaligned with authentic learning (Boud & Falchikov, 2007).
4.2 Toward Authentic and Process-Oriented Assessment
The next stage of AI-informed assessment emphasises:
- Process over product
- Formative feedback over summative judgement
- Transparency over concealment of AI use
Authentic assessments, such as portfolios, oral defences, design projects, and reflective commentaries, make learning visible and value higher-order thinking (Wiggins, 1998). When AI is explicitly incorporated into assessment design, students can be evaluated on their ability to use AI critically, ethically, and creatively rather than covertly.
5. AI Literacy as a Foundational Educational Outcome
5.1 Beyond Technical Skills
AI literacy extends beyond knowing how to use tools. It encompasses understanding how AI systems are trained, how bias and power operate within algorithms, and how AI reshapes knowledge production (Ng et al., 2021).
The next phase of education must integrate AI literacy as a core capability alongside traditional literacy. This includes:
- Functional AI literacy (use and interaction)
- Critical AI literacy (ethics, bias, governance)
- Creative AI literacy (co-design and innovation)
5.2 Democratic and Ethical Imperatives
Without widespread AI literacy, educational AI risks reinforcing existing inequalities, as only privileged learners gain the skills to question, adapt, and shape AI systems (Noble, 2018). Embedding AI literacy within compulsory education is therefore both an educational and democratic imperative.
6. Inclusion, Neurodiversity, and Wellbeing
6.1 AI and Universal Design for Learning
AI holds significant promise for inclusive education when aligned with Universal Design for Learning (UDL) principles (CAST, 2018). Adaptive interfaces, multimodal content, and personalised pacing can reduce barriers for neurodiverse learners, including those with ADHD, autism, and dyslexia. However, inclusion is not automatic. Poorly designed AI systems may exacerbate cognitive overload, surveillance anxiety, or deficit-based profiling (Cukurova et al., 2020).
6.2 Wellbeing and Cognitive Load
The next phase of AI integration must prioritise learner and teacher wellbeing. Research on cognitive load theory highlights the risk of overwhelming learners with excessive information and choices (Sweller et al., 2019). AI systems should therefore aim to reduce extraneous load and support executive functioning, not intensify performance pressures.
Human-centred AI design, co-created with diverse learners, is essential to ensure that efficiency does not come at the cost of wellbeing.
7. Governance, Ethics, and System-Level Change
7.1 From Policy to Practice
Ethical AI in education cannot be addressed solely through high-level policy statements. It requires translation into everyday pedagogical decisions, assessment practices, and institutional cultures (Floridi et al., 2018).
Key ethical considerations include:
- Data privacy and consent
- Transparency of algorithms
- Accountability for AI-driven decisions
- Clear boundaries around appropriate AI use
7.2 Systemic Transformation
AI will increasingly shape curriculum design, resource allocation, and early intervention systems through learning analytics and predictive modelling. While these developments offer opportunities for equity-focused support, they also risk entrenching deficit narratives if not critically examined (Williamson & Eynon, 2020). The next phase of the AI journey, therefore, requires systemic governance structures that balance innovation with care, efficiency with justice, and data with human insight.
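The double-edged nature of predictive modelling described above can be made concrete with a toy sketch. All feature names, weights, and the threshold below are illustrative assumptions, not a real analytics product: a weighted risk score flags students for human review, and the comments mark exactly where historical bias would enter if the weights were learned from past outcomes.

```python
# Toy early-intervention risk score (illustrative weights only).
# A real system would learn these weights from historical data; that is
# precisely where deficit narratives can be encoded, since past
# under-support of a group depresses its historical outcomes and then
# inflates its predicted "risk".

FEATURE_WEIGHTS = {
    "missed_deadlines": 0.5,
    "low_forum_activity": 0.3,
    "declining_quiz_scores": 0.2,
}
THRESHOLD = 0.4  # flag for human review above this score

def risk_score(features: dict) -> float:
    """Weighted sum of binary engagement signals (0 or 1)."""
    return sum(FEATURE_WEIGHTS[name] * features.get(name, 0)
               for name in FEATURE_WEIGHTS)

def flag_for_review(features: dict) -> bool:
    # Crucially, the output is a prompt for *teacher judgement*,
    # not an automatic intervention.
    return risk_score(features) > THRESHOLD

student = {"missed_deadlines": 1, "low_forum_activity": 0,
           "declining_quiz_scores": 1}
print(risk_score(student), flag_for_review(student))  # 0.7 True
```

The sketch also illustrates the governance point: routing the flag to a human reviewer, rather than triggering an automatic intervention, is a design decision, and it is exactly the kind of decision that systemic governance structures need to make explicit.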
8. Conclusion
The next stage of the AI journey in education is not defined by more powerful algorithms or faster adoption. Rather, it is characterised by a conceptual shift: from AI as a technological solution to AI as a pedagogical, ethical, and human challenge.
This essay has argued that future-focused education systems must move beyond instrumental uses of AI toward AI-informed pedagogy grounded in inclusion, relational teaching, and critical literacy. Teachers remain central as designers of learning, interpreters of data, and guardians of educational values. Assessment must be reimagined to prioritise authentic learning and transparency. AI literacy must become a foundational outcome, and wellbeing must be positioned as a core design principle rather than an afterthought. Ultimately, the measure of success in the next phase of AI in education will not be how intelligently machines perform, but how thoughtfully humans learn, teach, and live alongside them.
References
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences, 4(2), 167–207.
Biesta, G. (2015). Good education in an age of measurement. Routledge.
Biesta, G. (2022). Why educational research should not just solve problems, but should cause them as well. British Educational Research Journal, 48(1), 1–4.
Boud, D., & Falchikov, N. (2007). Rethinking assessment in higher education. Routledge.
CAST. (2018). Universal design for learning guidelines version 2.2.
Cukurova, M., Luckin, R., & Holmes, W. (2020). Artificial intelligence in education: The three grand challenges. British Journal of Educational Technology, 51(6), 2143–2157.
Eaton, S. E. (2023). Postplagiarism: Transcending the binary of plagiarism and integrity. International Journal for Educational Integrity, 19(1).
Floridi, L., et al. (2018). AI4People—An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707.
Fullan, M., Quinn, J., Drummy, M., & Gardner, M. (2020). Education reimagined: The future of learning. New Pedagogies for Deep Learning.
Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, 135–142.
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.
Mollick, E., & Mollick, L. (2023). Using AI to implement effective teaching strategies. Harvard Business Publishing Education.
Ng, D. T. K., et al. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Computers and Education: Artificial Intelligence, 2.
Noble, S. U. (2018). Algorithms of oppression. NYU Press.
OECD. (2021). AI in education: Challenges and opportunities. OECD Publishing.
Selwyn, N. (2019). Should robots replace teachers? Polity Press.
Sweller, J., Ayres, P., & Kalyuga, S. (2019). Cognitive load theory. Springer.
Vygotsky, L. S. (1978). Mind in society. Harvard University Press.
Wiggins, G. (1998). Educative assessment. Jossey-Bass.
Williamson, B. (2017). Big data in education. Sage.
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235.


