
Proof of practical assessment and digital change

Selected organisations and partnership contexts:

LSE | Nord Anglia Education | Aula | MIT–Nord Anglia | Juilliard–Nord Anglia

  • 26 departments in scope: digital assessment and feedback programme at LSE.

  • 8,000+ educators reached: professional learning across the Nord Anglia network.

  • 11,171 users across 68 schools: Moodle-hosted wellbeing toolkit deployed via Global Campus.

  • 49% improvement in design ratings: QA workflow redesign at Aula raised the share of learning designs meeting the quality benchmark from 8.82% to 54.26%.

From assessment workflow redesign and Moodle process support to professional learning at global scale, my work helps institutions turn complex educational change into something usable.

Start with the right kind of support

Most universities arrive at one of three questions. Each maps to a way of working together.


Assessment & AI Workflow Diagnostic

For universities that need a clear, practical view of what is working, what is not, and where to focus next.

Best when: AI, workload, feedback quality, marking, or platform use are creating uncertainty.

You get: Findings summary, workflow map, friction and risk analysis, and a 90-day plan.


Assessment Workflow Redesign Sprint

For institutions that need to redesign marking, moderation, feedback, Moodle or platform workflows.

Best when: Current processes are inconsistent, workarounds have multiplied, or platform decisions need clearer requirements.

You get: Redesigned workflows, functional requirements, user stories, pilot plan, and decision-support materials.


Fractional Assessment Transformation Partner

For teams that need ongoing senior support across a live programme of assessment, platform, or AI change.

Best when: The work needs continuity, stakeholder alignment, challenge, and practical momentum across months, not weeks.

You get: Advisory support, decision papers, implementation planning, adoption support, and workshops.

I also take on selected work with schools, colleges and education organisations where the challenge connects clearly to assessment, digital change or professional learning.

Make the right work easier to do

Assessment change works when institutions reduce wasted friction, protect useful effort, and strengthen the evidence they ask students to produce.


Friction is the time staff lose to unclear handoffs, duplicated checks, manual workarounds, inconsistent guidance, awkward Moodle and platform processes, and decisions that sit in people’s heads rather than in a shared process. It should be reduced wherever possible.

 

Effort is the work that still needs human judgement: assessment design, feedback quality, moderation, academic standards, student communication, and careful decision-making. It should be designed for, supported, and made sustainable.


My work helps universities separate the friction worth removing from the effort worth protecting. That can mean refining the assessment itself, redesigning the workflow around it, or planning a pilot that tests whether a tool or process will work in real institutional conditions.

The AI-era assessment question

AI asks universities to look again at what assessment is trying to evidence, how students show judgement, how staff workload is managed, and where technology should reduce friction without weakening trust or academic integrity.


Students do not need blanket permission or blanket prohibition. They need assessment-specific guidance, repeated practice in making judgements, and clear ways to show what work is theirs, what support they used, and how they made decisions.


Two questions tend to crowd the conversation: “Can/should students use AI?” and “Can we detect it?” Neither is the most useful starting point.


A more useful starting point is to ask: what does this assessment need to evidence? Where should student judgement be visible? What support is legitimate? Which decisions must remain human, transparent and accountable?


Automate friction, not judgement.


The question is not how much assessment work can be automated. It is which work should be automated, which should be augmented, and which must remain a matter of accountable human judgement.

Why this work matters now

Student AI use is already widespread. Staff use is growing quickly. Institutions are under pressure to clarify expectations, redesign assessment, protect academic standards and reduce workload without weakening trust.


Three recent data points make the scale concrete:

  • HEPI’s 2026 UK student survey: 95% of undergraduates reported using AI in at least one way, and 94% reported using generative AI for assessed work.

  • EDUCAUSE 2026: 94% of higher education staff and faculty reported using AI tools for work in the previous six months, but only 54% were aware of relevant institutional policies.

  • Jisc’s AI marking and feedback pilots (2025): workload reduction is possible, but human oversight, trust, training and implementation effort remain central to whether it lands.


At the London School of Economics, I support digital assessment and feedback change with a focus on assessment workflows, Moodle-based processes, platform change, and staff adoption.


The work spans workflow redesign, functional and technical requirements, market and options analysis, business case support, pilot planning, training, adoption support, and cross-stakeholder collaboration across academic, professional services and supplier teams.


Assessment change is rarely just a platform question. If the process is unclear, a new system will simply automate confusion.

Flagship proof: LSE assessment workflow redesign

Why institutions bring me in

The work sits where academic practice, assessment operations, digital platforms and implementation meet. I help turn that complexity into practical action by working across educators, students, professional services, senior stakeholders and suppliers.


About Naomi


I’m Naomi Rowan, founder of Gratitude Worldwide. Over more than fifteen years I have worked across higher education, international education and edtech, including LSE, Nord Anglia Education, Aula, MIT–Nord Anglia and Juilliard–Nord Anglia.


I bring a calm, practical and people-centred approach to complex assessment, feedback and implementation change.

Let’s talk

If you are reviewing assessment and feedback practice, planning platform change, or supporting staff adoption in the AI era, I’d be glad to hear more about your context. A scoping conversation is short, free, and decision-useful even if we don’t go on to work together.

Email naomi@gratitudeworldwide.org with a paragraph of context. I reply within two working days.

Gratitude Worldwide Ltd

Company No: SC710192

VAT No: 460894172


naomi@gratitudeworldwide.org

Scoping conversations by Zoom or Teams.


Assessment, feedback and AI-era change for higher education.


Remote across the UK and internationally.


Website last updated: May 2026
