What AI is making visible in assessment and feedback
- Naomi Rowan
- Apr 17
This article discusses how AI highlights existing issues in assessment and feedback in higher education, emphasising the need for practical changes in design and workflows.
AI is not the whole problem in higher education. In many universities, it is making older assessment pressures harder to ignore.
That is not a criticism of universities. The pace of change has been fast, and many teams are trying to respond while also managing workload, student expectations, academic standards, digital systems and local variation.
The conversation can become too narrow.

Much of the conversation about AI in higher education still begins with fear, policy or a debate about whether AI should be allowed.
Those questions are important, but they are only part of the picture. They need to sit alongside the practical realities of assessment design, evidence of learning, marking and feedback workflows, staff confidence, student guidance, and the gap between written policy and day-to-day practice.
In practice, AI is putting pressure on areas that were already under strain. Universities need more than commentary about AI; they need practical work that helps them respond with care, clarity and enough structure to make change usable.
Sector context
This is not a fringe issue. Sector bodies are actively working through questions around AI, assessment, feedback, workload, and academic standards. Jisc’s AI assessment pilot is exploring how AI can support formative and summative assessment, with a particular focus on reducing marking and feedback burden without compromising quality. QAA has also published and curated resources to support the sector in engaging with generative AI while securing academic standards.
The practical challenge for institutions is turning that wider sector conversation into workable local change: assessment design, workflow review, staff guidance, training, and implementation.
What AI is making visible in assessment practices
In many institutions, AI has made older issues harder to ignore:
assessment tasks where the evidence of learning is no longer clear enough;
feedback and marking processes that were already inconsistent or difficult to sustain;
digital workflows that were already more complex than they needed to be;
Moodle or platform processes that rely on local workarounds;
uneven staff confidence in using systems, guidance and new tools;
student uncertainty about what is allowed, expected or valued;
institutional conversations that sit too far from operational reality.
Seen in that light, AI is not simply a new threat. It is also a stress test.
It is asking institutions to look more closely at how assessment and feedback actually work: what each task is trying to evidence, where friction sits, where judgement needs support, and what needs to change first.
Why practical change matters more than broad positioning
A high-level AI statement can be useful, and so can policy. But they only become meaningful when they connect to the practice of assessment itself: what students are being asked to evidence, how staff will make judgements, what guidance students need, and how the process will actually work.
Institutions also need to ask more practical questions:
What is this assessment trying to evidence?
Where are staff losing time?
Which assessment and feedback processes are creating unnecessary work?
Where are local workarounds masking deeper problems?
What do students need to understand about AI use, evidence and expectations?
Where does Moodle or another platform support the work, and where does it create avoidable friction?
Which forms of assessment and feedback genuinely help students learn?
These questions make the work more manageable because they move the conversation from abstract concern into practical action.
They also keep the educational purpose in view. They respond to AI while protecting meaningful evidence of learning, with the added benefit of making assessment and feedback more workable for staff and students.
What a useful response can include
A useful institutional response to AI in assessment usually brings several strands together.
1. Assessment design
What is the task trying to evidence? Does the assessment still support that purpose? Do students have opportunities to show judgement, process, feedback response or disciplinary understanding?
2. Workflow review
How does assessment move through the institution? Where are marking, moderation, feedback, grade handling, exceptions or release processes creating avoidable friction?
3. Student guidance
What do students need to understand about AI use, acknowledgement, evidence of learning and expectations within their course or discipline?
4. Staff support
What do staff need in order to make confident, consistent decisions? Where is guidance enough, and where do people need examples, workshops, templates or ongoing support?
5. Pilot design
If a tool, workflow or assessment approach is being tested, what does the pilot need to show? What evidence will help the institution make a better decision?
That might mean changing assessment design. It might mean improving guidance. It might mean clarifying workflows, responsibilities and platform use. Often, it means several of these together.
In practice, that often starts with an Assessment & AI Workflow Diagnostic, a closer look at an Assessment Workflow Redesign Sprint, Assessment Design for the AI Era, Assessment Pilot Workflow Support, or Moodle Assessment Workflow Support where platform processes are part of the picture.
A useful question now
Rather than asking only whether AI is good or bad for education, I would start here:

What is AI showing us about how our current assessment and feedback systems are working - and where do we need to respond with more care, clarity, and practicality?
That is where the real work begins.
Why the timing matters
Assessment and feedback already sit at the heart of student experience, academic standards and staff workload. AI adds urgency, but it does not remove the need for careful review, thoughtful assessment design, workflow redesign and practical implementation.
This is why I focus on the point where assessment practice, evidence of learning, digital systems, staff adoption and institutional change meet.
For a more structured review framework, you can also read the practical guide: Assessment and feedback in the AI era.
If your institution is rethinking assessment and feedback in the context of AI, I’d be glad to hear more. A short diagnostic is often a useful place to begin when the team needs to understand whether the next step is assessment design, workflow redesign, Moodle support, pilot planning or clearer guidance.