What Kind of Educational Reality Do We Seek to Create in an Age of Intelligent Machines?

Introduction

Artificial intelligence (AI) has rapidly transitioned from the periphery of educational innovation to the forefront of contemporary debates about teaching, learning, assessment, and governance. Intelligent systems curate content, personalise learning pathways, automate feedback, generate text, and increasingly mediate interactions among learners, educators, and institutions. Although much of the discourse on AI in education emphasises efficiency, scalability, and performance optimisation, these perspectives risk obscuring foundational questions regarding the purposes of education. As intelligent machines assume functions traditionally associated with human cognition, education confronts not only technological challenges but also significant ontological, epistemological, and ethical considerations.

This essay contends that the educational reality appropriate to an age of intelligent machines should neither be oriented toward competition with AI nor defined by technological determinism. Rather, it advances a theoretical framework that conceptualises education as a relational, entangled, and ethical practice. Drawing on critical pedagogy, posthumanist theory, and critical AI literacy, the essay posits that AI should be regarded not as a neutral tool or inevitable solution, but as a socio-technical actor embedded within power relations, value systems, and institutional logics. From this standpoint, education must resist reductive narratives of optimisation and instead prioritise meaning-making, inclusion, critical agency, and ethical becoming.

Beyond Knowledge Transmission: Reframing the Purpose of Education

For much of modern educational history, schooling has been justified through the logic of knowledge transmission. Curricula have been organised around disciplinary content, teachers positioned as authoritative sources of knowledge, and learners assessed on their ability to recall and reproduce information. Conditions of informational scarcity historically underpinned these assumptions: access to knowledge was limited, expertise was concentrated, and learning institutions functioned as primary gateways to intellectual resources.

AI fundamentally disrupts these historical conditions. Intelligent systems can retrieve, summarise, translate, and generate knowledge at scale, thereby rendering traditional content-delivery pedagogies increasingly obsolete. If machines can perform these functions more efficiently than humans, the central question shifts from how education can incorporate AI to accelerate existing practices to whether such practices remain educationally justifiable.

Critical pedagogy provides a valuable framework for interrogating this transformation. Freire’s (1970) critique of the “banking model” of education remains highly pertinent: when learners are positioned as passive recipients of deposited knowledge, education perpetuates domination rather than fostering emancipation. In an AI-saturated environment, the banking model risks becoming fully automated, reducing learners to data points within algorithmic systems optimised for measurable outcomes. Instead, the educational reality to be pursued should foreground learning as meaning-making, conceptualised as an active, interpretive, and socially situated process through which learners engage with knowledge in relation to their lived experiences and broader socio-political contexts.

This reframing positions education not as the transmission of answers, but as the cultivation of critical inquiry. Questions such as why this knowledge matters, whose interests it serves, and how it is produced, legitimised, and contested become central. In an era when AI can generate plausible answers instantaneously, such questions constitute the distinctive domain of human education.

Posthumanism and the Concept of Entangled Intelligence

Humanist educational paradigms have traditionally assumed a bounded, autonomous learner whose cognition resides within the individual mind. AI disrupts this assumption by exposing the extent to which tools, technologies, languages, and social infrastructures have always mediated learning. Posthumanist theory provides a conceptual framework for understanding this disruption not as a loss of humanity, but as an opportunity to reconceptualise learning as fundamentally relational and entangled.

Drawing on Barad’s (2007) concept of entanglement, this essay conceptualises intelligence as distributed across human and nonhuman actors, including algorithms, interfaces, institutional policies, and material environments. AI is not simply an external aid to cognition, but an active participant in the production of knowledge, shaping what can be known, how it is represented, and who is authorised to know. From this perspective, learning emerges through intra-actions among humans and machines, rather than as an exclusively human accomplishment.

This posthuman perspective challenges the instrumental framing of AI as a neutral tool to be mastered, instead foregrounding questions of agency, responsibility, and accountability. If AI systems co-produce educational realities, ethical responsibility must extend beyond individual learners or teachers to include designers, policymakers, institutions, and the socio-economic logics that shape educational technologies.

An entangled view of intelligence does not diminish human agency; instead, it situates agency within complex assemblages that demand critical awareness and reflexivity. Education thus becomes a site for learning to live and act responsibly within human–machine ecologies.

Standardisation, Personalisation, and the Politics of Difference

One of the most frequently cited promises of AI in education is personalisation. Adaptive learning systems claim to tailor instruction to individual needs, learning styles, and performance levels, making education more inclusive and responsive. However, critical scholarship cautions that personalisation often operates through deeper standardisation, relying on normative models of the “ideal learner” encoded within algorithms.

Benjamin (2019) demonstrates that technological systems frequently reproduce and amplify existing inequalities by embedding racialised, ableist, and deficit-oriented assumptions into their design. In educational contexts, these dynamics risk pathologising difference, particularly for neurodiverse learners whose cognitive and learning styles may not align with algorithmic norms. Instead of expanding possibilities, AI-driven personalisation can constrain learners' trajectories, subtly directing them toward predefined outcomes that institutional metrics deem efficient or desirable.

The educational reality to be pursued must therefore place inclusion at its theoretical core, rather than treating it as a technical feature. From an inclusive and neurodiversity-affirming perspective, difference is not a problem to be solved but an epistemic resource that enriches collective learning. This approach necessitates resisting deficit-based analytics and embracing plural forms of intelligence, expression, and participation.

In practice, this entails designing educational systems in which AI adapts to learners, rather than requiring learners to conform to AI. It also involves preserving spaces for ambiguity, creativity, and non-linearity, qualities that are challenging to quantify yet essential to inclusive education.

Critical AI Literacy as a Democratic Imperative

As AI becomes increasingly embedded within educational infrastructures, the capacity to use intelligent systems is no longer sufficient. Learners must also develop the ability to critically interrogate how these systems function, whose interests they serve, and what kinds of futures they enable or constrain. Critical AI literacy extends beyond technical competence to encompass the ethical, political, and socio-cultural dimensions of AI.

Pangrazio and Selwyn (2023) argue that critical AI literacy involves understanding issues such as datafication, surveillance, bias, opacity, and power. In educational contexts, this requires enabling learners to question algorithmic decision-making processes that influence assessment, progression, and access to opportunities. Absent such critical awareness, education risks devolving into compliance training, preparing learners to adapt uncritically to technological systems rather than empowering them to shape those systems democratically.

From this theoretical perspective, critical AI literacy is not an optional supplement or specialist skill. It constitutes a foundational component of contemporary education, comparable to critical media literacy in previous technological eras. Critical AI literacy equips learners not only to use AI, but also to resist, redesign, and reimagine it in accordance with ethical and social values.

Education, Ethics, and the Question of Becoming

Much policy discourse frames education in terms of future readiness, emphasising the preparation of learners for jobs that do not yet exist in an economy transformed by automation. While these concerns are not insignificant, they risk reducing education to a form of human capital development, subordinated to market logics and economic competitiveness.

This essay adopts an alternative ethical orientation, conceptualising education as a process of becoming rather than mere preparation. Drawing on Biesta (2015), education is understood as concerned not only with qualification (what learners can do) or socialisation (how they fit into existing systems), but also with subjectification: who learners become as ethical, relational beings.

In an age of intelligent machines, this ethical dimension becomes increasingly salient. As AI systems shape decision-making, communication, and social relations, education must address how learners understand their responsibilities toward others, both human and nonhuman. This includes cultivating care, humility, moral imagination, and the capacity to navigate uncertainty and complexity.

This orientation resists the impulse to optimise education according to narrow performance metrics. Instead, it affirms the intrinsic value of education as a space for ethical reflection, relational engagement, and democratic possibility.


Resisting Technocratic and Neoliberal Narratives

Many contemporary AI-in-education initiatives are underpinned by a technocratic logic that prioritises efficiency, scalability, and return on investment. Within corporate and market-driven educational systems, AI is often positioned as a solution to perceived inefficiencies in teaching and learning, promising cost reductions and standardised quality control.

The theoretical positioning advanced in this essay explicitly resists such narratives. While acknowledging the material realities of educational systems, it contends that an uncritical embrace of AI risks subordinating educational values to market imperatives. When efficiency becomes the dominant criterion, care, inclusion, and ethical deliberation are frequently marginalised.

A critical, posthuman approach maintains that education cannot be reduced to optimisation problems without forfeiting its moral and democratic significance. The educational reality to be pursued is therefore one that remains attentive to power, values, and purposes, particularly in the context of technological innovation.

Conclusion: Toward a Relational and Ethical Educational Reality

The integration of intelligent machines into education presents multiple, contested possibilities rather than a singular future. This essay has argued that the educational reality to be pursued should be grounded in relationality, critical engagement, and ethical responsibility. Drawing on critical pedagogy, posthumanism, and critical AI literacy, it has positioned education as a site of meaning-making rather than content delivery, of entangled intelligence rather than isolated cognition, and of inclusive becoming rather than standardised performance.

Within this framework, AI is neither a saviour nor a threat, but a provocation that demands renewed attention to the values underpinning educational systems. The central challenge is not merely how to integrate intelligent machines into education, but how to ensure that education remains oriented toward human and planetary flourishing in a world increasingly shaped by nonhuman intelligence.

References

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Biesta, G. (2015). Good education in an age of measurement: Ethics, politics, democracy. Routledge.

Braidotti, R. (2019). Posthuman knowledge. Polity Press.

Freire, P. (1970). Pedagogy of the oppressed. Continuum.

Giroux, H. A. (2011). On critical pedagogy. Bloomsbury.

Pangrazio, L., & Selwyn, N. (2023). Towards a school-based critical AI literacy: Theoretical perspectives and practical possibilities. Learning, Media and Technology, 48(2), 1–14.

Williamson, B. (2017). Big data in education: The digital future of learning, policy and practice. Sage.
