Critical Thinking in AI-Mediated Classrooms: Pedagogical Strategies That Foster Epistemic Agency
Positioning AI Beyond Instrumentalism
The rapid integration of generative
artificial intelligence (AI) into educational contexts has raised concerns
about the potential erosion of students’ critical thinking capacities (Facer,
2023; Selwyn, 2024). Institutional responses have typically oscillated between
prohibition and uncritical adoption, frequently framing AI as either a threat
to academic integrity or a neutral tool for efficiency. Both perspectives risk
reinforcing instrumentalist logics that prioritise output over cognition and
compliance over epistemic agency. It is argued here that critical thinking is
not inherently diminished by AI, but rather by pedagogical designs that
position AI as an authoritative knowledge source instead of an object of
critique.
Building on critical pedagogy (Freire,
1970), critical AI literacy (Ng et al., 2023), and interpretivist approaches to
learning, this section outlines classroom strategies grounded in empirical and
theoretical research to support critical thinking in AI-mediated environments.
These strategies reposition AI as a fallible, value-laden socio-technical
system, thereby foregrounding judgment, reflexivity, and meaning-making as
central educational outcomes.
AI as a Fallible Cognitive Partner
An effective strategy for cultivating
critical thinking is to position AI as a fallible thinking partner rather than
an authoritative source of answers. In this approach, students are explicitly
tasked with interrogating AI-generated outputs by identifying assumptions,
omissions, inconsistencies, and potential biases. This pedagogical move
activates what Sperber et al. (2010) describe as epistemic vigilance, defined
as the capacity to evaluate the reliability and credibility of communicated
information.
Instead of requiring students to
produce original work despite AI, this strategy asks them to develop original
judgments about AI itself. Research indicates that evaluative and comparative
tasks engage higher-order cognitive processes more reliably than generative
tasks alone (Anderson & Krathwohl, 2001). Furthermore, by destabilising
AI’s epistemic authority, learners are encouraged to reclaim ownership of
knowledge construction, which aligns with Freire’s (1970) conception of
critical consciousness.
Prompt Deconstruction and Metacognitive Awareness
Although “prompt engineering” is
increasingly recognised as a technical skill, prompt deconstruction provides
greater pedagogical value for fostering critical thinking. In this strategy,
students analyse how variations in prompt wording influence AI responses,
thereby revealing the interpretive and ideological dimensions of human–AI
interaction. By comparing neutral, value-laden, and ideologically framed
prompts, learners develop metacognitive awareness of how language structures
knowledge production.
This approach is consistent with
discourse-analytic traditions that emphasise language as constitutive rather
than merely descriptive (Fairclough, 2015). It also supports neurodiverse
learners by making implicit expectations and cognitive processes explicit,
which reduces reliance on tacit academic norms (Armstrong, 2017). Prompt
deconstruction, therefore, serves as both a critical literacy practice and an
inclusive design strategy.
AI-Supported Counterfactual and Perspectival Reasoning
Critical thinking requires considering
alternative perspectives and evaluating competing value systems. AI can
facilitate this process by generating counterfactual explanations or
ideologically distinct interpretations of the same concept. For instance,
students may prompt AI to explain an educational issue from neoliberal,
humanistic, and critical pedagogical perspectives, and then critically analyse
the underlying assumptions and implications of each.
This strategy enhances perspectival
reasoning and ethical judgment by making ideological positions explicit rather
than implicit (Biesta, 2015). Importantly, the critical work is located not in
the AI-generated text itself but in students’ comparative analysis and
justificatory reasoning. Such tasks resist epistemic homogenisation and promote
pluralistic knowledge engagement, particularly in culturally diverse or
international educational settings.
Human-in-the-Loop Assessment and Reflective Accountability
Assessment design is pivotal in
determining whether AI use undermines or enhances critical thinking.
Human-in-the-loop assessment models allow AI use during exploratory phases,
such as brainstorming, structuring, or clarification, while evaluating
students’ reflective decision-making processes rather than the final textual
product. Common assessment artefacts include decision logs, reflective
commentaries, and dialogic defences in which students explain how and why AI
suggestions were accepted, modified, or rejected.
This approach is consistent with interpretive
methodologies that prioritise meaning-making and situated understanding over
standardised outputs (Creswell & Poth, 2018). It also addresses inequities
associated with linguistic conformity, benefiting neurodiverse students and
multilingual learners who might otherwise be penalised for deviations from
dominant academic voice norms.
Designing for Productive Friction
Critical thinking is frequently
catalysed by cognitive discomfort rather than seamless efficiency. Tasks that
intentionally expose AI’s limitations, such as those involving ethical
ambiguity, local contextual knowledge, or lived experience, create what is
termed productive friction (Friesen & Hug, 2009). In these tasks, students
are required to identify where AI responses fail, what forms of knowledge are
absent, and why human judgment remains indispensable.
This strategy challenges narratives of
technological solutionism by foregrounding the ontological limits of
computational systems (Knox, 2019). By recognising what AI cannot know or
represent, learners develop ontological humility and a more nuanced understanding
of knowledge as relational, contextual, and value-laden.
Bias Mapping and Critical AI Literacy
Fostering critical thinking in
AI-mediated classrooms also requires explicit engagement with issues of bias,
representation, and power. Bias mapping activities prompt students to analyse
whose knowledge is foregrounded in AI outputs, whose perspectives are
marginalised, and which institutional or commercial interests are served.
Outputs may include positionality statements, ethical risk matrices, or visual
bias maps.
These practices are central to
critical AI literacy, which extends beyond functional competence to encompass
ethical, political, and epistemological dimensions of AI use (Ng et al., 2023).
In international and corporate schooling contexts, bias mapping is particularly
salient because AI systems often reproduce dominant Western, neoliberal, or
deficit-oriented narratives that conflict with inclusive educational aims.
Temporal Pedagogy: Slowing Down AI Use
Research-informed practice indicates
that when AI is introduced matters as much as how it is used.
Temporal sequencing strategies, such as requiring students to think or write
independently before consulting AI, help preserve productive struggle and
prevent premature cognitive closure. Subsequent AI engagement thus becomes a
comparative rather than a substitutive process, reinforcing reflective
judgment.
This “slow AI” approach is consistent
with cognitive research on deep learning, which emphasises the importance of
effortful processing and delayed feedback (Kirschner et al., 2006). It also
challenges efficiency-driven educational cultures that equate speed with
intelligence.
Synthesis: Reclaiming Critical Thinking in the Age of AI
Collectively, these strategies
indicate that critical thinking with AI arises not from technological
restriction or uncritical adoption, but from pedagogical designs that
reposition AI as an object of inquiry rather than an epistemic authority. When
students are encouraged to interrogate, contextualise, and ethically evaluate
AI outputs, they engage in higher-order thinking practices that are both
cognitively rigorous and socially responsive.
Within this framework, AI does not
displace human judgment but instead amplifies its necessity. Critical thinking, therefore, becomes less about resisting AI and more about cultivating epistemic agency in an increasingly automated knowledge landscape.
References
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing. Longman.
Armstrong, T. (2017). Neurodiversity in the classroom. ASCD.
Biesta, G. (2015). Good education in an age of measurement. Routledge.
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design (4th ed.). Sage.
Facer, K. (2023). Learning futures in the age of AI. Routledge.
Fairclough, N. (2015). Language and power (3rd ed.). Routledge.
Freire, P. (1970). Pedagogy of the oppressed. Continuum.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work. Educational Psychologist, 41(2), 75–86.
Knox, J. (2019). What does the “postdigital” mean for education? Postdigital Science and Education, 1(1), 1–17.
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2023). Conceptualising AI literacy. Computers and Education: Artificial Intelligence, 4, 100104.
Selwyn, N. (2024). Should robots replace teachers? Polity.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359–393.


