Assessment and feedback in the AI era: a practical guide for universities

By Naomi Rowan, Founder & Consultant, Gratitude Worldwide Ltd | Published: April 2026 | Last updated: 13 May 2026

Universities are under growing pressure to rethink assessment and feedback in the context of AI, staff workload, student expectations, academic standards, and increasingly complex digital ecosystems.

This guide sets out a practical way to approach that work. It is designed for digital education teams, assessment and quality teams, academic development teams, programme leaders, and professional services colleagues working across assessment, platforms, workflows, and implementation.

In brief

AI-era assessment and feedback work is not only about policy, acceptable use, or tools. It also touches assessment design, evidence of learning, marking and moderation workflows, Moodle/platform processes, student guidance, and staff confidence and adoption.

A useful review looks across six areas:

  • assessment design;

  • feedback practice;

  • marking and moderation workflows;

  • digital systems;

  • staff adoption;

  • the policy-to-practice gap.

The goal is to understand what needs to change first: the assessment itself, the workflow around it, Moodle/platform use, staff guidance, pilot planning, or wider implementation support.

What universities need to review

Table 1: Six areas to review in AI-era assessment and feedback

  • Assessment design: Are assessment tasks still fit for purpose in the context of AI?

  • Feedback practice: Is feedback timely, useful, consistent, and manageable?

  • Marking and moderation workflows: Are roles, handoffs, decisions, and exceptions clear?

  • Digital systems: Do platforms support the process, or create extra work?

  • Staff adoption: Do staff have the confidence, guidance, and time to work differently?

  • Policy-to-practice gap: Does institutional guidance translate into day-to-day assessment practice?

Why this matters now

AI is creating new questions for assessment while also exposing older pressures around workload, feedback quality, marking processes, digital workflows, student guidance and staff confidence.

The sector conversation is moving beyond high-level AI statements and acceptable-use guidance. Universities are now working through more practical questions: what counts as strong evidence of learning, how assessment tasks may need to adapt, how feedback can remain meaningful and manageable, and how tools or platforms sit inside real workflows.

Jisc’s AI assessment work is focused on how AI can support colleges and universities with workload problems around marking and feedback without compromising quality. QAA has also curated generative AI resources to help the sector engage with AI while maintaining academic standards, and Advance HE continues to publish resources and professional development around assessment and feedback in higher education.

The practical challenge is to connect assessment design, policy, workflow, platform use, student agency and staff adoption, rather than treating them as separate conversations.

A practical review framework

The purpose of this framework is practical: to help institutions see where assessment, workflow, guidance, platform use and adoption need to be brought into better alignment.

Use this eight-step structure:

  1. Clarify the assessment and feedback problem.

  2. Map the current workflow.

  3. Identify where staff and students experience friction.

  4. Review how AI changes risk, workload, and expectations.

  5. Check whether policy, process, and platform use align.

  6. Identify quick wins and deeper redesign needs.

  7. Plan staff adoption, guidance, and training.

  8. Turn findings into a 90-day action plan.

Common signs that workflow redesign is needed

  • Marking and moderation processes vary significantly by department.

  • Feedback turnaround depends on local workarounds.

  • Staff are unclear which system or process to use.

  • Assessment policies are clear, but implementation is inconsistent.

  • Moodle or another platform is blamed for problems that are partly process-related.

  • AI guidance exists, but staff are unsure what it means in practice.

  • Students experience inconsistent feedback, submission, or communication processes.

From review to action

Table 2: From review to action

  • If the assessment no longer provides strong evidence of learning: Assessment Design for the AI Era.

  • If marking, moderation, or feedback workflows are inconsistent or too manual: Assessment Workflow Redesign Sprint.

  • If Moodle or platform processes are creating avoidable friction: Moodle Assessment Workflow Support.

  • If a tool, workflow, or assessment approach needs testing before rollout: Assessment Pilot Workflow Support.

  • If staff need clearer guidance, training, or confidence: Staff Adoption, Guidance & Professional Learning.

  • If the issue is still unclear: Assessment & AI Workflow Diagnostic.

Where this work can lead

A review of assessment and feedback in the AI era often leads to different kinds of action.

  • Some departments need assessment design support: refining tasks, clarifying AI use, making student judgement more visible, and building feedback or process evidence into the assessment.

  • Some teams need workflow redesign: marking, moderation, Moodle/platform processes, grade handling, guidance and staff adoption.

  • Some institutions need a pilot: a structured test of a tool, workflow or redesigned assessment approach before wider rollout.

The useful first step is to understand which kind of work is actually needed.

Frequently asked questions

Does AI mean every assessment needs to be redesigned?

No. Some essays, exams and existing assessment tasks may still be appropriate. The work is to understand what each assessment is trying to evidence, whether that evidence remains strong in the context of AI, and whether the workflow around the task is workable.

What should universities review first?

A useful starting point is to review assessment design, feedback practice, marking and moderation workflows, Moodle/platform use, staff adoption and the policy-to-practice gap together. These areas are highly connected.

How does this connect to Moodle or platform workflows?

Assessment design decisions often become workflow decisions. A staged task, portfolio, peer feedback activity, rubric, oral component or AI-use declaration still needs to work inside Moodle or the wider platform ecosystem.

What is the difference between assessment design and workflow redesign?

Assessment design focuses on what the task is trying to evidence and how students show learning. Workflow redesign focuses on how the process works in practice: submission, marking, moderation, feedback, grade handling, exceptions, platform use and staff adoption.

What is the best starting point?

If the problem is still unclear, an Assessment & AI Workflow Diagnostic can help identify whether the next step is assessment design, workflow redesign, Moodle support, pilot planning, staff adoption or ongoing advisory support.

Gratitude Worldwide Ltd

Company No: SC710192

VAT No: 460894172

naomi@gratitudeworldwide.org

Scoping conversations by Zoom or Teams.

Assessment, feedback and AI-era change for higher education.

Remote across the UK and internationally.
