Opening the Black Box: Reframing Artificial Intelligence as a Pedagogical Object in Contemporary Learning Environments

 


Abstract

The swift incorporation of artificial intelligence (AI) into education has elicited extensive institutional reactions; nevertheless, a significant portion of the current discourse continues to emphasise tool adoption, efficiency, and academic integrity. These approaches are inadequate because they fail to address AI as a sociotechnical system that transforms knowledge, pedagogy, and learner agency. Adopting a critical interpretivist perspective, this article reconceptualises AI as a pedagogical object of inquiry rather than solely an instructional tool. A heuristic framework is proposed, comprising three interrelated dimensions: epistemic understanding, cognitive partnership, and ethical interrogation. Each dimension is grounded in established theoretical traditions, including epistemic cognition, distributed cognition, and critical digital pedagogy. The analysis integrates emerging empirical insights from classroom practice and examines implications for neurodiverse learners. Additionally, AI is situated within broader political-economic dynamics, with attention to its role in data extraction and platform governance. The article concludes that meaningful engagement with AI requires a shift from instrumental use toward critical, reflective, and contextually responsive pedagogies.

Introduction

The emergence of generative artificial intelligence in education, particularly following the widespread availability of large language models, has intensified debates regarding the future of teaching and learning. Schools and universities have responded in diverse ways, ranging from outright bans to rapid integration into curricula. However, much of this response has been reactive and tool-focused, centring on questions of access, assessment, and academic integrity.

A pedagogical account of what it means to learn with and about AI remains underdeveloped. In many classrooms, AI is either treated as a productivity tool or positioned as a threat to authentic learning. Both framings obscure a more fundamental issue: AI is not simply a tool, but a system that actively shapes how knowledge is produced, represented, and evaluated.

This article adopts a different starting point, contending that AI should be understood as a pedagogical object of inquiry—an entity that learners must actively investigate, interpret, and critique. This shift has significant implications for classroom practice, teacher roles, and educational aims. It also raises important questions about inclusion, particularly for neurodiverse learners whose interactions with AI may differ in meaningful ways.

Research Questions

This article is guided by the following questions:

  1. What forms of epistemic understanding are required for learners to critically engage with AI systems?
  2. How can AI be integrated into classroom practice as a form of cognitive partnership without diminishing intellectual effort?
  3. How do ethical and sociotechnical considerations—including issues of bias, data extraction, and inclusion—reshape the use of AI in learning environments?

 

Theoretical Positioning

AI as Sociotechnical Infrastructure

AI in education is best understood as part of a broader sociotechnical infrastructure, in which technical systems, institutional practices, and human actors are mutually constitutive. As Ben Williamson argues, data-driven systems increasingly shape educational governance, producing new forms of visibility, accountability, and control.

This perspective moves beyond viewing AI as a neutral instructional aid. Instead, it highlights how AI systems:

  • Encode assumptions about knowledge
  • Privilege certain forms of data
  • Influence pedagogical decision-making

From Tool to Pedagogical Object

Existing approaches tend to frame AI as a tool to be used. This article proposes a shift toward treating AI as an object to be understood. This perspective aligns with traditions in critical digital pedagogy that emphasise learner agency, reflexivity, and the interrogation of technological systems.

This repositioning is crucial. If AI remains invisible as a system, learners engage only with its outputs. If it becomes an object of inquiry, learners can engage with its underlying structures and implications.

A Heuristic Framework for AI Pedagogy

To operationalise this shift, a three-part heuristic framework is proposed. This framework is not intended as an exhaustive model, but rather as a conceptual tool to guide pedagogical practice.

1. Epistemic Understanding: Making AI Knowable

This dimension draws on theories of epistemic cognition, which explore how individuals understand knowledge and its production. In the context of AI, learners must grapple with the fact that outputs are generated through statistical pattern recognition rather than human-like understanding.

Most contemporary AI systems are based on machine learning, where models are trained on large datasets to predict likely sequences of text or actions. Without this understanding, learners may attribute undue authority to AI outputs.
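The idea that outputs arise from statistical pattern recognition, not understanding, can be made concrete with a deliberately tiny sketch. The example below is illustrative only and does not reflect any particular system's architecture: it reduces "predicting likely sequences of text" to counting which word most often follows another in a miniature corpus.

```python
from collections import Counter, defaultdict

# A toy "training corpus" (invented for illustration).
corpus = (
    "the model predicts the next word "
    "the model learns patterns from data "
    "the data shapes the model"
).split()

# Count bigram frequencies: how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # prints "model" — the most frequent follower of "the"
print(predict("zzz"))   # prints "None" — no data, no prediction
```

Even this caricature surfaces the pedagogically important point: the "prediction" carries no comprehension, only frequency, and a word absent from the training data yields nothing at all.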

In practice, developing epistemic understanding involves:

  • Exploring how prompts shape responses
  • Identifying inconsistencies and errors
  • Recognising the role of training data

For example, in a secondary classroom, students who are asked to generate historical explanations using AI often notice that slight changes in phrasing can produce significantly different interpretations. Discussing these variations opens up space to examine how knowledge is constructed.
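The classroom observation above, that slight rephrasing yields different interpretations, can be simulated in miniature. The prompts and probability values below are entirely invented for illustration; they stand in for the learned distributions a real generative model would sample from.

```python
import random

# Hypothetical next-phrase probabilities for two nearly identical prompts
# (values are invented; a real model's distributions are learned from data).
continuations = {
    "explain the causes of the war": {
        "economic rivalry": 0.5,
        "alliance systems": 0.3,
        "nationalism": 0.2,
    },
    "explain why the war started": {
        "a single triggering event": 0.6,
        "alliance systems": 0.25,
        "nationalism": 0.15,
    },
}

def respond(prompt, seed=None):
    """Sample one continuation from the prompt's probability distribution."""
    dist = continuations[prompt]
    rng = random.Random(seed)
    return rng.choices(list(dist), weights=list(dist.values()))[0]

for prompt in continuations:
    print(prompt, "->", respond(prompt))
```

Two features of the sketch mirror classroom experience: rephrasing the prompt changes the distribution being sampled, and even an identical prompt can produce different outputs on different runs. Both are useful discussion starters about how AI-generated knowledge is constructed.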

For neurodiverse learners, this process may require additional scaffolding. Explicitly mapping input–output relationships and visualising processes can support comprehension, particularly for learners who benefit from structured representations.

2. Cognitive Partnership: Learning With AI

The second dimension conceptualises AI as part of a distributed cognitive system. Drawing on theories of distributed cognition, learning is understood as emerging through interaction between individuals and tools.

Here, AI functions not as a replacement for thinking, but as a cognitive partner that can:

  • Generate alternative perspectives
  • Provide iterative feedback
  • Support idea development

However, this partnership is not inherently beneficial. Poorly designed tasks can reduce cognitive demand, encouraging students to outsource thinking. The pedagogical challenge is to structure interactions so that AI amplifies rather than replaces cognition.

Consider a classroom task in which students:

  1. Use AI to generate an argument
  2. Critique its assumptions
  3. Revise it based on their own reasoning

In this sequence, AI becomes a starting point for deeper engagement rather than an endpoint.

For neurodiverse learners, cognitive partnership can be particularly valuable. AI tools can:

  • Support language processing
  • Provide alternative explanations
  • Offer low-pressure feedback

Yet these benefits depend on careful mediation. Over-reliance may limit the development of independent strategies, while poorly calibrated outputs may introduce confusion rather than clarity.

3. Ethical Interrogation: Questioning AI Systems

The third dimension foregrounds the ethical and political dimensions of AI. This draws on scholarship in data ethics, which highlights issues of bias, accountability, and power.

AI systems are shaped by:

  • The data on which they are trained
  • The assumptions embedded in their design
  • The interests of organisations that develop them

In educational contexts, this raises critical questions:

  • Whose knowledge is represented in AI outputs?
  • How is student data collected and used?
  • What forms of bias are reproduced?

For example, students examining AI-generated content on global issues may notice a predominance of Western perspectives. Such observations can serve as a basis for discussions of representation and epistemic justice.

Importantly, ethical interrogation also involves recognising AI as part of a political economy. Educational AI systems are often embedded within commercial platforms that rely on data extraction and user engagement. This dimension is frequently overlooked in classroom discussions but is central to understanding the broader implications of AI adoption.

Integrating Neurodiversity Across the Framework

Rather than treating neurodiversity as a separate consideration, this article positions it as integral to all three dimensions.

  • Epistemic understanding: Different learners may conceptualise AI processes in distinct ways, requiring varied representations and explanations.
  • Cognitive partnership: AI can provide tailored support, but must be used in ways that promote autonomy rather than dependency.
  • Ethical interrogation: Neurodiverse perspectives are essential for identifying biases and limitations in AI systems, particularly those grounded in normative assumptions about cognition.

This integrated approach avoids deficit framing and instead recognises neurodiversity as a source of insight into how AI systems function and fail.

Implications for Teacher Practice

Reframing AI as a pedagogical object has significant implications for teachers. Rather than focusing solely on tool adoption, educators are required to:

  • Facilitate inquiry into AI systems
  • Design tasks that sustain cognitive engagement
  • Support ethical reflection
  • Adapt approaches for diverse learners

This does not diminish the teacher's role; it intensifies it. Teachers become mediators of complex sociotechnical environments, drawing on both pedagogical expertise and emerging forms of digital literacy.

However, this role is shaped by broader structural conditions. As AI becomes embedded within educational platforms, teachers may face increasing pressure to align with data-driven systems. Maintaining professional agency in this context is a key challenge.

Discussion: From Instrumental Use to Critical Engagement

The analysis suggests that current approaches to AI in education remain limited by an instrumental focus on efficiency and control. While these concerns are valid, they do not address the deeper transformations introduced by AI.

Reframing AI as a pedagogical object shifts attention toward:

  • Understanding over use
  • Inquiry over compliance
  • Reflection over automation

This shift is particularly important in relation to equity. Without critical engagement, AI risks reinforcing existing inequalities through biased data and uneven access. With it, learners can develop the capacity to question and reshape these systems.

Conclusion

AI is not merely entering education; it is reshaping its foundations. The challenge for educators is not only to incorporate new tools, but also to reconsider what it means to know, to learn, and to teach in AI-rich environments.

A framework for addressing this challenge has been proposed, grounded in epistemic understanding, cognitive partnership, and ethical interrogation. By positioning AI as a pedagogical object of inquiry, this approach enables movement beyond reactive responses toward more thoughtful, inclusive, and critically informed practices.

The central question is no longer whether students will use AI, but whether they will understand it, and whether education will equip them to do so.

References

Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

Luckin, R. (2022). AI for school teachers. Routledge.

Selwyn, N. (2021). Education and technology: Key issues and debates (3rd ed.). Bloomsbury.

Williamson, B. (2023). Big data in education: The digital future of learning, policy and practice (2nd ed.). Sage.
