Artificial Intelligence and University Applications: Authenticity, Equity, and the Reconfiguration of Admissions Practices
Abstract
The integration of artificial
intelligence (AI) into higher education has significantly reshaped university
admissions processes. Applicants increasingly use generative AI tools such as
ChatGPT to construct personal statements, while institutions deploy algorithmic
systems to manage large applicant pools. This paper investigates the
implications of AI-mediated admissions through a qualitative interpretivist
framework. Drawing on document analysis of institutional policies, emerging
empirical studies (2020–2026), and critical synthesis, the study examines how
AI transforms notions of authenticity, equity, and evaluative validity.
Findings indicate that AI simultaneously democratizes access to application support and destabilizes traditional markers of merit. The paper argues for
a shift toward process-oriented and dialogic admissions models and proposes a
framework for critical AI admissions literacy. The study contributes to ongoing
debates in educational technology and critical pedagogy by foregrounding the
sociotechnical and ethical dimensions of AI in high-stakes selection systems.
Keywords
Artificial intelligence, university
admissions, higher education, equity, authenticity, critical digital pedagogy,
generative AI
1. Introduction
Historically, university admissions
systems have operated as gatekeeping mechanisms, balancing meritocratic ideals
with institutional priorities. These systems have traditionally relied on
academic metrics, standardized testing, and qualitative components such as
personal statements. The emergence of generative AI has disrupted these
conventions, necessitating an examination of how established practices
intersect with new technological influences.
AI-powered tools enable applicants to
generate sophisticated written materials, raising critical questions regarding
authorship, originality, and fairness. Concurrently, universities are
implementing AI-driven systems for application screening and predictive
analytics, thereby embedding algorithmic decision-making more deeply into
admissions processes.
This paper addresses the central
research question:
How is artificial intelligence
reshaping authenticity, equity, and evaluative practices in university
admissions?
Through a qualitative interpretivist
approach, this study situates AI within broader sociotechnical transformations
and emphasizes the dynamic interplay among technology, institutional practices,
and human agency.
2. Literature Review
2.1 Generative AI and
Academic Writing
Recent scholarship underscores the
transformative impact of generative AI on academic writing practices. Evidence
indicates that AI tools can improve clarity, coherence, and accessibility,
particularly for students with limited academic support (Cheng et al., 2025).
Nevertheless, persistent concerns regarding authorship and originality remain.
A systematic review by RSIS
International (2025) found that most studies identify risks related to
plagiarism, ethical misuse, and reduced critical engagement. These concerns are
particularly pronounced in high-stakes contexts such as university applications,
where written submissions are decisive.
2.2 AI and Admissions
Practices
Institutions are increasingly
incorporating AI into admissions workflows. Algorithmic systems are used to
filter applications, predict student success, and optimize recruitment
strategies (Marín, 2025). While these systems improve efficiency, they also introduce
risks related to bias and transparency.
Research demonstrates that AI models
trained on historical data can perpetuate existing inequalities,
disproportionately affecting marginalized groups (Llerena-Izquierdo &
Ayala-Carabajo, 2025). This situation raises critical questions regarding fairness
and accountability in AI-mediated decision-making.
2.3 Authenticity and
the Crisis of the Personal Statement
The personal statement has
traditionally served as a medium for authentic self-expression. The
proliferation of AI-generated writing, however, challenges this assumption.
Cournoyea (2025) contends that the essay format is becoming increasingly
unreliable as an indicator of individual capability.
Consequently, scholars recommend
alternative assessment methods that prioritize process, interaction, and
real-time evaluation.
2.4 Critical Digital
Pedagogy
The study is grounded in critical
digital pedagogy, which extends Paulo Freire's work into digital contexts. This
framework emphasizes:
- Power relations
in technological systems
- The importance
of agency and voice
- The need for
ethical and equitable practice
AI in admissions should therefore be
understood not only as a technical innovation but also as a sociocultural
phenomenon that reshapes educational access and identity.
3. Methodology
3.1 Research Design
This study adopts a qualitative
interpretivist research design, suitable for exploring complex sociotechnical
phenomena. Interpretivism prioritizes understanding how individuals and
institutions construct meaning, making it particularly relevant for examining
AI in admissions.
The research employs a critical
document analysis (CDA) approach, combined with thematic synthesis, to analyze how AI is represented and operationalized in university admissions contexts.
3.2 Data Sources
The study draws on three primary data
sources:
1. Institutional
Policy Documents
Policies and guidance from major
admissions systems, including:
- UCAS
- Common
Application
These documents provide insight into
official positions on AI usage.
2. Peer-Reviewed
Literature (2020–2026)
A corpus of recent academic studies
on:
- Generative AI
in education
- AI ethics and
governance
- Admissions
practices and equity
Databases included Scopus, Web of
Science, and Google Scholar.
3. Grey Literature
Reports, policy briefs, and preprints
(e.g., arXiv) were included to capture emerging trends not yet fully
represented in peer-reviewed journals.
3.3 Sampling Strategy
A purposive sampling strategy was employed to select sources that:
- Directly address AI in education or admissions
- Were published between 2020 and 2026
- Represent diverse geographical contexts
A total of 42 sources were included in
the final dataset.
3.4 Data Analysis
Data were analyzed using reflexive thematic analysis (Braun & Clarke, 2006), involving:
- Familiarization with data
- Initial coding
(open coding)
- Theme
development
- Theme
refinement
- Interpretation
within a critical framework
Themes were iteratively developed and
reviewed to ensure coherence and analytical depth.
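The movement from open codes to candidate themes described above can be caricatured in a few lines of Python. This is an illustrative sketch only: the excerpts and codes below are invented for demonstration, and the actual analysis in this study was interpretive and researcher-led, not automated.

```python
from collections import defaultdict

# Invented example data: each tuple pairs a source excerpt with the
# open codes assigned to it during initial coding.
coded_excerpts = [
    ("UCAS guidance on AI disclosure", ["transparency", "authorship"]),
    ("Study of AI essay support for first-generation applicants", ["access", "equity"]),
    ("Report on bias in algorithmic application screening", ["bias", "equity"]),
]

# Group excerpts under each code to surface candidate themes for review.
candidate_themes = defaultdict(list)
for excerpt, codes in coded_excerpts:
    for code in codes:
        candidate_themes[code].append(excerpt)

# Inspect how much evidence supports each candidate theme.
for code, excerpts in sorted(candidate_themes.items()):
    print(f"{code}: {len(excerpts)} excerpt(s)")
```

In reflexive thematic analysis the grouping step is only a starting point; themes are then refined and interpreted against the critical framework rather than read directly off code frequencies.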
3.5 Trustworthiness and Rigor
To ensure rigor, the study applied:
- Credibility: Triangulation across data
sources
- Dependability: Transparent documentation of
methods
- Reflexivity: Acknowledgement of researcher
positionality
- Transferability: Thick description of contexts
3.6 Ethical Considerations
The study relies exclusively on
publicly available data and does not involve human participants. However,
ethical considerations include:
- Responsible
representation of institutional policies
- Critical
engagement with power and bias in AI systems
4. Findings
4.1 AI as a
Democratizing Tool
AI tools offer scalable support for
applicants, particularly those from historically underserved groups, and have
the potential to reduce persistent inequities in application preparation.
However, the findings indicate that
access to AI alone does not ensure equity. Variations in AI literacy, available
resources, and strategic utilization may reinforce or exacerbate existing
inequities among applicants.
4.2 The Erosion of
Authenticity
The widespread use of AI in writing
diminishes the reliability of personal statements as indicators of individual
capability. Admissions systems increasingly struggle to differentiate between
human and AI-generated content.
4.3 Algorithmic Bias
and Institutional Risk
AI systems employed in admissions can
replicate historical biases embedded in training data. This situation creates
several risks for institutions, including:
- Legal
challenges
- Reputational
damage
- Ethical
violations
4.4 The Shift Toward
Process-Oriented Evaluation
Institutions are beginning to move
away from static written submissions toward:
- Interviews
- Portfolios
- Timed
assessments
These approaches are designed to
capture authentic student capabilities within environments characterized by
pervasive AI use.
5. Discussion
The findings reveal a fundamental
tension between efficiency and authenticity within AI-mediated admissions
processes.
From a critical digital pedagogy perspective, AI both empowers and constrains:
- It expands
access to resources.
- It reshapes how
identity and merit are constructed.
This study contends that admissions
systems should move beyond simplistic conceptions of “AI misuse” and adopt more
nuanced understandings of human-AI collaboration.
6. Implications
6.1 For Policy
- Clear
guidelines on ethical AI use
- Transparency in
algorithmic decision-making
6.2 For Practice
- Redesign of
admissions processes
- Increased use
of interactive assessments
6.3 For Research
- Empirical
studies on AI impact in admissions
- Focus on marginalized and neurodiverse learners
7. Conclusion
AI is profoundly transforming
university admissions, challenging traditional assumptions regarding merit,
authorship, and fairness. Although it presents opportunities for increased
access and efficiency, it also introduces significant ethical and practical
challenges.
The future of admissions depends on
reimagining evaluation systems to align with the realities of AI-mediated
learning and communication. Through a critical and reflective approach,
institutions can leverage AI’s potential while safeguarding equity and authenticity.
References
Braun, V., & Clarke, V. (2006).
Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2),
77–101.
Cheng, A., et al. (2025). Artificial
intelligence-assisted academic writing. Journal of Healthcare Simulation
Research.
Cournoyea, M. (2025). Rethinking the
personal statement in the AI era. Academic Medicine.
Gonsalves, C. (2025). Addressing
student non-compliance in AI use declarations. Assessment & Evaluation
in Higher Education.
Jeon, J. (2025). The ethics of
generative AI in social science research. Technology in Society.
Lee, J., Borchers, C., Alvero, A. J.,
Joachims, T., & Kizilcec, R. F. (2026). The digital divide in generative
AI. arXiv preprint.
Llerena-Izquierdo, J., &
Ayala-Carabajo, R. (2025). Ethics of AI in academia. Informatics, 12(4),
111.
Marín, Y. R. (2025). Ethical
challenges associated with AI in universities. Journal of Academic Ethics.
RSIS International. (2025). Ethical
use of AI in academic writing: A systematic review.


