

Do it with care. Do it together. Make it workable.
Practical, agency-centred assessment, feedback and workflow support for universities in the AI era.
I help universities make assessment, feedback and LMS workflows practical in the AI era: reducing avoidable friction while protecting evidence of learning, student agency, academic judgement and staff capacity.
Proof of practical assessment and digital change
Selected organisations and partnership contexts include LSE, Nord Anglia Education, Aula, MIT–Nord Anglia and Juilliard–Nord Anglia.
26 departments in scope at LSE
8,000+ educators reached through professional learning
11,171 users across 68 schools
49% improvement in instructional design ratings
From assessment workflow redesign and Moodle process support to professional learning at global scale, my work helps institutions turn complex educational change into something usable.
Start with the right kind of support
Assessment & AI Workflow Diagnostic
For universities that need a clear, practical view of what is working, what is not, and where to focus next.
Best when: AI, workload, feedback quality, marking processes or platform use are creating uncertainty.
Typical output: findings summary, workflow map, friction/risk analysis and 90-day priorities.
Assessment Workflow Redesign Sprint
For institutions that need to redesign marking, moderation, feedback, Moodle or platform workflows.
Best when: current processes are inconsistent, workarounds have multiplied, or platform decisions need clearer requirements.
Typical output: redesigned workflows, functional requirements, user stories, pilot plan and decision-support materials.
Fractional Assessment Transformation Partner
For teams that need ongoing senior support across a live programme of assessment, platform, AI or implementation change.
Best when: the work needs continuity, stakeholder alignment, challenge, coordination and practical momentum over time.
Typical output: advisory support, decision papers, implementation planning, adoption support, workshops and programme guidance.
I also take on selected work with schools, colleges, education organisations and partners where the challenge connects clearly to assessment, digital change, or professional learning.
Make the right work easier to do
Assessment change works when institutions reduce wasted friction, protect useful effort, and strengthen the quality of the evidence they ask students to produce.
That means reducing the friction that wastes staff time: unclear handoffs, duplicated checks, manual workarounds, inconsistent guidance, awkward Moodle/platform processes and decisions that sit in people’s heads rather than in a shared process.
It also means protecting the work that still needs human judgement: assessment design, feedback quality, moderation, academic standards, student communication and careful decision-making.
My work helps universities separate the friction that should be removed from the effort that needs to stay. That can mean refining the assessment itself, redesigning the workflow around it, or planning a pilot that tests whether a tool or process will work in real institutional conditions.
The AI-era assessment question
AI asks universities to look again at what assessment is trying to evidence, how students show judgement, how staff workload is managed, and where technology should reduce friction without weakening trust or academic integrity.
Students do not need blanket permission or blanket prohibition. They need assessment-specific guidance, repeated practice in making judgements, and clear ways to show what work is theirs, what support they used, and how they made decisions.
The useful question is not simply: can students use AI?
Not only: can we detect it?
The better question is: what does this assessment need to evidence, where should student judgement be visible, what support is legitimate, and which decisions must remain human, transparent and accountable?
Automate friction, not judgement.
To support staff and students alike, the focus should not be how much assessment work can be automated, but which work should be automated, which should be augmented, and which must remain a matter of accountable human judgement.
Why this work matters now
Student AI use is already widespread. Staff use is growing quickly. Institutions are under pressure to clarify expectations, redesign assessment, protect academic standards and reduce workload without weakening trust.
- In HEPI’s 2026 UK student survey, 95% of undergraduates reported using AI in at least one way, and 94% reported using generative AI for assessed work.
- In a 2026 EDUCAUSE survey, 94% of higher education staff and faculty respondents reported using AI tools for work in the previous six months, but only 54% were aware of relevant institutional policies or guidelines.
- Jisc’s AI marking and feedback pilots suggest that workload reduction is possible, but human oversight, trust, training and implementation effort remain central.

Flagship proof: LSE assessment workflow redesign
At the London School of Economics, I support digital assessment and feedback change with a focus on assessment workflows, Moodle-based processes, platform change and staff adoption.
The work includes assessment workflow redesign, functional and technical requirements, market and options work, business case support, pilot planning, training, adoption support and cross-stakeholder collaboration across academic, professional services and supplier contexts.
Assessment change is rarely just a platform question. If the process is unclear, a new system can simply automate confusion.
Why institutions bring me in
My work sits at the intersection of academic practice, assessment operations, digital platforms and implementation. I help turn complexity into practical action by working across educators, professional services, senior stakeholders and suppliers.
- If you need clarity before spending money → Assessment & AI Workflow Diagnostic
- If you know the process needs redesign → Assessment Workflow Redesign Sprint
- If you’re preparing to pilot or procure a tool → Pilot design and evaluation support
- If you’re preparing departments to adapt to change → Staff Adoption, Guidance and Training
- If the work is already live and needs continuity → Fractional Assessment Transformation Partner
About Naomi

I’m Naomi Rowan, founder of Gratitude Worldwide.
Over more than 15 years, I’ve worked across higher education, international education and edtech contexts, including LSE, Nord Anglia Education, Aula, MIT–Nord Anglia and Juilliard–Nord Anglia.
I bring a calm, practical and people-centred approach to complex assessment, feedback, digital workflow and implementation change.


