
Human Agency in AI-Era Assessment
By Naomi Rowan, Founder & Consultant, Gratitude Worldwide Ltd
Published: 10 May 2026
Protecting evidence of learning, human judgement and staff capacity
In brief
AI has made assessment feel more uncertain, but it has not created every assessment problem universities are now facing.
In many cases, AI is making older questions harder to ignore: what assessment is really trying to evidence, how students show judgement, where staff time is being lost, how feedback can remain meaningful, and whether current workflows are strong enough to support trust at scale.
The response does not need to be a panicked redesign. Nor does it need to be a retreat into suspicion, surveillance or blanket restriction.
A more useful starting point is to ask:
What human effort still matters here, and how do we design assessment, feedback and workflow so that effort is visible, supported and sustainable?
AI is not only an academic-integrity issue
Academic integrity is essential. Universities need fair processes, clear expectations and careful guidance about AI use.
But if the conversation stays only at the level of detection, misconduct or acceptable-use policy, it can miss the deeper assessment questions.
AI asks universities to look again at:
- how students demonstrate understanding;
- where judgement, interpretation and decision-making appear in the assessment;
- what kinds of evidence are strong enough to support academic standards;
- how staff can mark, moderate and give feedback without unsustainable workload;
- where technology can reduce avoidable friction;
- where human effort should remain central;
- how students are guided to use, question or avoid AI appropriately;
- how trust is maintained between students, staff, departments and institutional systems.
This is why AI-era assessment work needs to connect design, workflow, policy, platform use and staff adoption. Treated separately, each part can look manageable. In practice, they affect one another.
The question is not simply “Can students use AI?”
This question is important, but it is not enough.
A stronger set of questions might be:
- What is this assessment trying to evidence? Is it assessing knowledge, analysis, judgement, process, performance, creativity, professional reasoning, disciplinary method, communication or something else?
- Where does the student need to make decisions? Can those decisions be seen, explained, reflected on or discussed?
- What kind of support is legitimate? Can students use AI for brainstorming, planning, editing, feedback, simulation or critique? Where would that support become inappropriate?
- What should remain human? Where does the work require personal judgement, ethical reasoning, disciplinary interpretation, care, dialogue or accountability?
- What does the workflow need to support? How will submission, marking, moderation, feedback, grade handling, exceptions, declarations or evidence trails actually work?
- What would make this sustainable for staff? A stronger assessment design can still fail if it creates unmanageable marking, unclear moderation or extra manual work.
These questions help move the conversation from anxiety into design.
Human agency means students are not only producing outputs
In an AI-era assessment context, human agency means students have meaningful opportunities to think, choose, judge, explain, revise, respond and take responsibility for their work.
That does not mean every assessment must become highly complex or fully authentic. Some existing essays, exams, problem sets, performances, presentations, portfolios and practical tasks may still be appropriate.
The work is to understand whether the assessment still gives students a fair and meaningful way to show learning.
Human agency may become more visible through:
- explanation of choices made during the work;
- staged drafts or checkpoints;
- reflective commentary;
- feedback response;
- oral discussion or viva-style elements;
- process evidence;
- applied or localised tasks;
- discipline-specific judgement;
- professional or ethical reasoning;
- collaborative decision-making;
- appropriate use, critique or non-use of AI.
None of these approaches is a universal answer. Each brings workload, equity, accessibility, moderation and workflow implications. The aim is not to add unnecessary complexity, but to make the right kind of learning visible.
Evidence of learning needs to be designed, not assumed
Before AI, many assessments relied heavily on the final submitted product. In some contexts, that may still be enough. In others, it may no longer provide strong enough evidence on its own.
A useful review asks:
- What evidence does this task currently produce?
- What evidence is missing?
- What evidence would be stronger, fairer or more educationally useful?
- Who will interpret that evidence?
- How will staff be supported to make consistent judgements?
- How will students understand what is expected?
- What will happen when the evidence is ambiguous?
An AI-writing indicator, similarity report, process log, draft history, oral explanation or reflective statement can all provide different kinds of information.
None of them removes the need for academic judgement.
The risk is not only that students use AI inappropriately, but also that institutions gather more signals than they can interpret fairly, or introduce tools without clear processes around them.
Strong evidence of learning is not only a technical question. It is an educational, operational and ethical one.
Staff workload is part of assessment quality
AI-era assessment conversations often focus on student behaviour. Staff capacity needs equal attention.
A redesigned assessment that protects integrity but doubles the marking burden will not be sustainable. A tool that promises efficiency but creates new checking, exception-handling or communication work may not reduce workload in practice. A policy that looks clear centrally may still leave staff unsure what to do in day-to-day marking, moderation or student guidance.
This is why assessment change needs workflow thinking.
Universities need to understand:
- where staff time is currently being lost;
- which processes rely on local memory or workarounds;
- where Moodle, Canvas or another platform is supporting the process well;
- where the platform is being blamed for unclear process;
- what guidance staff need;
- what decisions should be standardised;
- where academic judgement should remain flexible;
- what support is needed before any new tool or approach is rolled out.
Protecting human agency includes protecting staff capacity. Staff cannot exercise careful judgement if the process around them is unclear, fragmented or overloaded.
Where AI should reduce friction
AI and assessment technologies can be useful where they reduce avoidable friction.
That may include:
- summarising themes from feedback or evaluation data;
- supporting staff with first-draft guidance materials;
- helping students practise with formative questions;
- improving accessibility of instructions;
- supporting workflow mapping or documentation;
- helping teams compare policy options;
- reducing repetitive administrative drafting;
- supporting structured feedback processes;
- helping staff identify patterns that need human review.
In these cases, AI can help make the right work easier.
The test is not whether AI can do something. The test is whether it improves the educational process without weakening judgement, trust, fairness or accountability.
Where human effort needs to remain
Some forms of effort should not be removed simply because they are difficult.
Assessment still needs human judgement. Feedback still needs educational purpose. Moderation still needs careful interpretation. Students still need to develop their own understanding, voice and responsibility. Staff still need space to make contextual decisions.
Human effort needs to remain where the work involves:
- academic standards;
- disciplinary judgement;
- ethical reasoning;
- interpretation of ambiguous evidence;
- student support;
- feedback that helps learning;
- moderation and fairness;
- decisions with consequences for students;
- care, dialogue and accountability.
The aim is not to protect every old process. Some existing processes are unnecessarily heavy, inconsistent or unclear.
The aim is to separate wasted friction from useful effort.
- Wasted friction should be reduced.
- Useful effort should be designed for, supported and made sustainable.
Trust is built through clarity, not certainty
AI has introduced more ambiguity into assessment, but universities do not need perfect certainty before they act.
They need clearer processes, better questions and more careful implementation.
Trust is strengthened when students understand what is expected, staff understand how to respond, policies translate into practice, platforms support the workflow, and decisions are made transparently and fairly.
That means universities may need to review:
- assessment briefs;
- AI-use guidance;
- declaration processes;
- feedback workflows;
- marking and moderation arrangements;
- student communication;
- platform settings;
- evidence trails;
- escalation routes;
- staff development;
- pilot design and evaluation.
Trust does not come from one policy, one tool or one detection method. It comes from the alignment between assessment purpose, evidence, workflow and human judgement.
A practical review lens
When reviewing assessment in the AI era, I usually find it helpful to look across four connected areas.
1. Evidence
- What does the assessment need to evidence, and is that evidence still strong enough?
- This includes assessment design, student judgement, process visibility, feedback response and the role of any AI-use declaration or supporting evidence.
2. Workflow
- How does the assessment actually move through the institution?
- This includes submission, marking, moderation, feedback release, grade handling, exceptions, platform use, handoffs and staff guidance.
3. Trust
- How are students, staff and institutional teams supported to make fair decisions?
- This includes academic standards, transparency, consistency, student communication, human oversight and policy-to-practice alignment.
4. Capacity
- Is the work sustainable?
- This includes staff workload, training, support, local workarounds, digital processes, pilot planning and implementation capacity.
Looking across these areas helps universities avoid treating AI as a narrow policy problem or a simple tooling decision. It also helps identify the right next step: assessment redesign, workflow redesign, Moodle or platform support, staff guidance, a pilot, or a broader diagnostic.
How I can help
I support universities that are reviewing assessment, feedback, AI-era workflows, Moodle/platform processes and staff adoption.
This can include:
- assessing whether current assessment tasks still provide strong evidence of learning;
- identifying where student judgement and process could be made more visible;
- reviewing marking, moderation and feedback workflows;
- mapping where staff time is being lost to avoidable friction;
- clarifying where AI or assessment technology may help;
- supporting pilot design and evaluation;
- developing staff guidance, adoption materials or briefing papers;
- helping teams move from broad AI concern into practical next steps.
Most work begins with a scoping conversation, a focused senior-leader briefing, or an Assessment & AI Workflow Diagnostic.
Make the right work easier to do
AI-era assessment is not only about preventing misuse. It is about designing assessment, feedback and workflows that protect learning, judgement, trust and staff capacity.
Universities do not need to solve every question at once.
A useful first step is to understand what is happening now: where the evidence is strong, where the workflow is fragile, where staff are carrying hidden workload, and where students need clearer guidance.
From there, the next step becomes easier to see.