Measuring the health of a service
Evaluating the extent to which user needs are met across UK Research and Innovation's funding service.
2024
Problem
When I joined UKRI, the organisation had few ways to answer the following questions:
- How is the service doing for users?
- What parts of the service are working for users?
- What parts are not working?

As a result, the user's voice was less prominent in product roadmapping discussions.
The root of the problem
This mattered because UKRI's funding service is long and complex. UKRI strategises which areas to invest in, puts out calls for applications, evaluates thousands of applications from universities and businesses by drawing on experts around the UK, selects winning bids, and administers funds to the winning organisations over several years. This isn't a service for applying for a new driving licence. Consequently, my first step was to map out the service and break it down into smaller, simpler steps.

Additionally, our open-ended feedback forms were outdated and untargeted. They collected only qualitative data that we lacked the resources to analyse fully, and they gave us no way to tell which part of the service the feedback related to.

Here, I will illustrate my work not with UKRI data, but with a made-up example: borrowing a library book.
Approach

As the new permanent senior user researcher, my responsibility was to own this piece of work: evaluating the “health of the service”. With the help of service design partners, I broke the service up into 10 discrete mini-services.
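To make the idea of mini-services concrete, here is a minimal sketch using the library-book example from earlier. The stage names below are invented for illustration and are not UKRI's actual mini-services.

```python
# Illustrative only: a made-up decomposition of the "borrow a library
# book" service into discrete stages, mirroring how a long funding
# journey can be split into mini-services.
LIBRARY_SERVICE_STAGES = [
    "Join the library",
    "Search the catalogue",
    "Reserve a book",
    "Collect the book",
    "Read and renew",
    "Return the book",
]

for i, stage in enumerate(LIBRARY_SERVICE_STAGES, start=1):
    print(f"Stage {i}: {stage}")
```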

I kept thinking about my recent application for an American Express card. It felt like a single experience with a distinct start and end, capped off with a request for feedback. Further secondary research led me to an approach: define "service outcomes" for each stage of the service, treat them as hypotheses, and, once a stage ends, send targeted surveys to the appropriate users to measure them.

I gave our feedback mechanisms a complete makeover. With targeted surveys, we can now quantify the user experience and tie each piece of feedback to a specific stage of the service.
Step 1: Defining service outcomes

Credit here goes to my service design partner. With a holistic view of the service and more time working on it under his belt, I leaned on him to craft the first service outcomes. To me, a service outcome is a short goal, for both the user and the business, to strive toward.

Service outcomes are answers to the question: “If this part of the service is successful, what should the user be able to do? How should they feel?”
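As a sketch of how an outcome can be written down as a small, testable hypothesis, here is the library example again. Every field value below is invented for illustration; this is not UKRI's actual format.

```python
from dataclasses import dataclass

@dataclass
class ServiceOutcome:
    """A short, measurable goal for one stage of the service."""
    stage: str       # which mini-service this belongs to
    statement: str   # what the user should be able to do or feel
    measure: str     # the survey question that tests it
    target: float    # 1-5 Likert mean at which we call the outcome "met"

# Invented example for the "Reserve a book" stage of the library analogy.
reserve_outcome = ServiceOutcome(
    stage="Reserve a book",
    statement="Members can reserve the book they want and feel confident "
              "it will be held for them.",
    measure="How confident are you that your reserved book will be ready "
            "for collection? (1-5)",
    target=4.0,
)
```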
Step 2: Measuring service outcomes
Because each service outcome was concise and measured a single concept, it was easy to craft targeted surveys in MS Forms with user satisfaction questions, demographics, and Likert-scale questions, giving us a quick, numerical sense of the extent to which we are meeting each outcome. Working with the engagement team, I created a survey email campaign calendar for them to follow, ensuring we send feedback surveys to the right users at the right time.
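To show how a single-concept outcome makes the numbers straightforward, here is a minimal scoring sketch. The choice of summary statistics (mean and top-two-box) and the sample responses are my assumptions for illustration, not UKRI's actual method.

```python
from statistics import mean

def score_likert(responses: list[int]) -> dict:
    """Summarise 1-5 Likert responses for one survey question.

    Returns the mean score and the share of respondents answering
    4 or 5 ("top-two-box"), a common satisfaction summary.
    """
    if not responses:
        raise ValueError("No responses to score")
    top_two = sum(1 for r in responses if r >= 4) / len(responses)
    return {"mean": mean(responses), "top_two_box": top_two, "n": len(responses)}

# Invented responses to the reservation-confidence question above.
print(score_likert([5, 4, 4, 3, 5, 2, 4]))
# -> mean ≈ 3.86, top-two-box ≈ 0.71, across 7 responses
```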
Step 3: Visualising the data
I started visualising the data in Miro. Knowing it wasn't where the data would live forever, Miro was the perfect tool to test the concept, present the data, and get buy-in from stakeholders.

Once a month, I host the “service health review” meeting with senior leaders, where we walk through the status of each service outcome (red/amber/green) using the last month's data. Outcomes flagged as “red” (underperforming) get a closer discussion, along with next steps. The service outcomes are weighted by priority, set by our service and product owners.
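As a sketch of how the red/amber/green flags and priority weights can roll up into one health read-out: the thresholds, scores, and weights below are illustrative assumptions, not the actual figures from our review.

```python
# Illustrative RAG roll-up: each outcome gets a 1-5 score from its
# survey and a priority weight from the service and product owners.
RAG_THRESHOLDS = {"green": 4.0, "amber": 3.0}  # assumed cut-offs

def rag_status(score: float) -> str:
    if score >= RAG_THRESHOLDS["green"]:
        return "green"
    if score >= RAG_THRESHOLDS["amber"]:
        return "amber"
    return "red"

# (score, weight) per outcome - all values invented for the library example.
outcomes = {
    "Reserve a book": (4.2, 3),
    "Collect the book": (3.4, 2),
    "Return the book": (2.6, 1),
}

for name, (score, weight) in outcomes.items():
    print(f"{name}: {score} -> {rag_status(score)} (weight {weight})")

# Overall health as a priority-weighted mean of outcome scores.
overall = sum(s * w for s, w in outcomes.values()) / sum(w for _, w in outcomes.values())
print(f"Overall weighted score: {overall:.2f} -> {rag_status(overall)}")
```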

Now, my data analyst partner and I are designing a dashboard in Tableau for senior leaders to have on hand.
Step infinity: Socialising and getting buy-in
This step happens at all times and doesn't stop. I started socialising this work in my first months, presenting my thinking at all-hands meetings and constantly gathering feedback from the senior leadership team (SLT) to ensure they were bought in and believed in what I was building.

For example, after running through the service outcomes during a monthly service health review, I could sense some hesitancy toward them, so I ran a workshop with the product team to refine them. If stakeholders don't believe in the outcomes, the measurements I play back to them will fall on deaf ears.
Outcome
It is still early days, and the product roadmap has already been defined until next spring. As I write this in August 2024, I have worked on this effort for about seven months. The heavy lifting is almost complete. People are bought in and keen to hear about the service during our monthly review meetings. Targeted surveys launch regularly, gathering holistic quantitative data about our users for the first time. Senior leaders have commissioned my data analyst partner and me to design the information in Tableau for regular senior leadership reviews.

While this “service health” project is still maturing, I have complete faith it has the legs it needs to influence the roadmap.