What Universities Get Wrong About AI and Assessment

  • Writer: Naomi Rowan
  • 2 days ago
  • 3 min read

Updated: 16 hours ago

AI is not the whole problem in higher education. In many universities, it is exposing problems that were already there.


The conversation is often too narrow.


Too much of the conversation about AI in higher education starts in the wrong place.

It starts with fear, policy, or a debate about whether AI should be allowed at all.


Those questions matter, but they are not the whole picture.


In practice, AI is putting pressure on areas that were already under strain: assessment design, marking and feedback workflows, staff confidence, and the gap between written guidance and day-to-day reality.


That is why universities need more than commentary about AI. They need practical work that helps them respond well.


What AI is really exposing


In many institutions, AI has made older issues harder to ignore:

  • assessment formats that were already vulnerable or over-relied on

  • feedback and marking processes that were already inconsistent

  • digital workflows that were already more complex than they needed to be

  • uneven staff confidence in using systems, guidance, and new tools

  • institutional conversations that sit too far from operational reality


Seen in that light, AI is not simply a new threat. It is also a stress test.


It is forcing institutions to look more closely at how assessment and feedback actually work, where friction sits, and what needs to change first.


Why practical change matters more than broad positioning


A high-level AI statement can be useful. So can policy. But neither is enough on its own.


Institutions also need to ask more practical questions:

  • Where are staff getting stuck?

  • Which assessment and feedback processes are creating unnecessary work?

  • Where are local workarounds masking deeper problems?

  • What needs to change in workflow, platform use, guidance, or support?

  • Which forms of assessment and feedback genuinely benefit students?


These are the questions that make meaningful implementation possible.


They also reduce panic, because they move the conversation away from abstract fear and toward practical action.


What a useful response looks like for AI and assessment


A useful institutional response to AI in assessment usually includes five things:

  1. a realistic review of current assessment and feedback practice

  2. careful attention to workflow and process, not just policy wording

  3. staff support that goes beyond one-off training

  4. a willingness to redesign what no longer works well

  5. meaningful student engagement in any redesign


That might mean changing assessment design. It might mean improving guidance. It might mean clarifying workflows, responsibilities, and platform use. Often, it means all three.


In practice, that often starts with an assessment and feedback review, a closer look at assessment workflow redesign, or a review of Moodle assessment workflows where they are part of the picture.



The question worth asking now


The most useful question is not “Is AI good or bad for education?”


It is:

What is AI showing us about how our current assessment and feedback systems are working, and where do we need to respond with more care, clarity, and practicality?

That is where the real work begins.



Related pages


If your institution is rethinking assessment and feedback in the context of AI, I’d be glad to hear more. In many cases, a short diagnostic sprint is the best place to begin.
