Performance Framework

Version 1.0

The goal of the Digital Services Factory is to provide well-maintained, user-centred services that help users achieve an outcome with minimal effort. To measure how successful our services are, we need to answer questions such as: Is the service working for its users? Is it working well? Are the changes we are making improving the service? What does good look like?

The performance framework aims to answer these questions, provide methods to gather and analyse information in order to understand how well the services are working for users, and help us build or improve successful services.

Who is this framework written for?

This framework is for any team developing and managing digital services for the Government of Cyprus. 

Setting Performance Metrics

In order to know if services work well for your users, you need to design metrics that have a clear meaning and collect data that tells you how your service is performing against them.

Metrics and measurements

Metrics are measurements that tell you how well something is performing. Typically they’re expressed as percentages or numbers.

For example, if 2000 people tried to complete your service, that’s a ‘measurement’ of total attempts. If 1000 of them successfully completed your service, that’s a ‘measurement’ of completions. Combine the two and you get a ‘metric’ for how often people using the service manage to complete it. In this example, the completion rate is 50%.
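The worked example above can be sketched in code. This is a minimal illustration; the function name is ours, not part of the framework:

```python
def completion_rate(completions: int, attempts: int) -> float:
    """Combine two measurements into a metric: the percentage of attempts completed."""
    if attempts == 0:
        return 0.0
    return 100.0 * completions / attempts

# The example from the text: 2000 attempts, 1000 completions.
rate = completion_rate(1000, 2000)
print(f"completion rate: {rate:.0f}%")  # completion rate: 50%
```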

Core metrics

Collecting data about your service allows you to measure its performance. You can use data to make sure:

  • the service is meeting user needs
  • the service allows users to easily complete the task it provides
  • there are enough people using the service to make it cost-efficient
  • people know about the service and are choosing to use it

The following 5 metrics have been identified as relevant to the purpose of the DSF and should be gathered for all services:

  • time for a transaction – how much time it takes for users to make a transaction using your service
  • user satisfaction – what percentage of users are satisfied with their experience of using your service
  • transaction completion rate – what percentage of transactions users successfully complete
  • digital take-up – what percentage of users choose your digital service to complete their task over non-digital channels 
  • service availability – what percentage of the time the service is up and running.
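To make the five core metrics concrete, here is one possible sketch of how they could be derived from raw measurements. The field names, input shapes, and the use of a median are our assumptions, not prescribed by the framework:

```python
from dataclasses import dataclass

@dataclass
class CoreKpis:
    median_transaction_time_s: float  # time for a transaction
    user_satisfaction_pct: float      # satisfied responses out of all responses
    completion_rate_pct: float        # completed transactions out of started ones
    digital_take_up_pct: float        # digital transactions out of all channels
    availability_pct: float           # uptime out of total time

def core_kpis(times_s, satisfied, responses, completed, started,
              digital, all_channels, uptime_s, downtime_s) -> CoreKpis:
    times = sorted(times_s)
    median = times[len(times) // 2] if times else 0.0
    pct = lambda part, whole: 100.0 * part / whole if whole else 0.0
    return CoreKpis(
        median_transaction_time_s=median,
        user_satisfaction_pct=pct(satisfied, responses),
        completion_rate_pct=pct(completed, started),
        digital_take_up_pct=pct(digital, all_channels),
        availability_pct=pct(uptime_s, uptime_s + downtime_s),
    )

# Illustrative figures only.
kpis = core_kpis([120, 90, 300], 80, 100, 1000, 2000, 600, 800, 99_000, 1_000)
print(kpis)
```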

More KPIs have been identified as useful and they are described on the Core KPIs Performance page.

Performance metrics per service

As a starting point, you need to collect data for the core KPIs for all services, and then define additional metrics that are specific to your service.

How to set performance metrics for your service in a nutshell:


The process described below is indicative and should not be followed blindly; adapt it to the service and the situation at hand.

1. Purpose

In order to understand if a service works well, we must first understand its purpose. Start designing your metrics by clearly defining the purpose of your service from the user perspective.

You should consider:

  • who the users of your service are
  • the user needs your service is designed to meet 
  • any policy outcomes your service is intended to produce, for example, a reduction in fraud, response time, or costs
  • core purposes for all services.

The desired outcome is a one-line summary of why your service exists.

Ideally, express user needs using the format:

I need/want/expect to… [what does the user want to do?]

So that… [why does the user want to do this?]

If it’s helpful, you can add:

As a… [which type of user has this need?]

When… [what triggers the user’s need?]

Because… [is the user constrained by any circumstances?]

For example:

“Social Security Benefits service will make it easier and more efficient for the public to make benefits claims, saving time, including more people (such as people who are abroad) and enabling more access to benefits.”

“Ariadni Payments will make it easier and more efficient for government to process payments, saving time and effort for service teams across government.”

2. Benefits

Define a small number of goals or benefits for your service based on the purpose. 

Definition of benefit: A benefit should be a short description of a certain outcome that will contribute to your service fulfilling its purpose. 

Base your benefits on your understanding of your users’ needs. There might be cases where the benefits are defined within the purpose of the service. 


Need: “as a citizen abroad, I need quick access to money for healthcare so that I can get medical treatment on time.”

Service benefit: “Make it easier to get timely medical treatment abroad by speeding up the application process for funding.”

Hypotheses based on benefits

Articulate how something you are going to do will help you achieve one of your service’s benefits and why.

Ideally use the format:

“Doing/changing [something]…”

“Will [benefit]…”

“Because [assumption/logic/strategy]…”

Aim to get to the core assumptions about how your service will add value. This will help you make your measurements as relevant as possible.

For example:

If we provide an online form, it will speed up the application process because digital applications are quicker than paper ones.

Hypotheses will help you make better decisions with data because they let you measure the effect of a specific course of action you’ve chosen to follow.

3. Measures

Your hypotheses should lead you to a set of things you can measure. At the start of developing your service, you do not have to decide exactly how you’ll measure them, but you should have a clear understanding of what each measurement means.

For example:

We will measure how long it takes a user to make an application using the online form we build.
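As an illustration of that measurement, the sketch below pairs each user’s start and submission events to compute how long applications take. The event names and log shape are hypothetical; real data would come from your analytics tool:

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: (user_id, event, timestamp) rows from web analytics.
events = [
    ("u1", "form_started",   datetime(2024, 5, 1, 10, 0)),
    ("u1", "form_submitted", datetime(2024, 5, 1, 10, 12)),
    ("u2", "form_started",   datetime(2024, 5, 1, 11, 0)),
    ("u2", "form_submitted", datetime(2024, 5, 1, 11, 30)),
]

def application_times_minutes(events):
    """Pair each user's form start with their submission and return durations."""
    starts, durations = {}, []
    for user, event, ts in events:
        if event == "form_started":
            starts[user] = ts
        elif event == "form_submitted" and user in starts:
            durations.append((ts - starts.pop(user)).total_seconds() / 60)
    return durations

times = application_times_minutes(events)
print(f"median time to apply: {median(times):.0f} minutes")  # 21 minutes
```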

As your service evolves, you’ll need to identify which metrics will be the KPIs you share to show how your service is performing. Choose 3 or 4 metrics that would answer the question: ‘is this service working?’

Consider user pain points: addressing them will increase the impact of the service.

You’ll still probably need other metrics within your team to understand your service’s performance in more detail.

4. Data 

After you’ve chosen what to measure, you’ll need data for your measurements. 


You will need to get access to a variety of data sources to gather data. Data sources could include:

  • digital (web) analytics
  • surveys and user feedback
  • site performance
  • service desks, call centres, and user support
  • back-end systems, for example, monitoring systems for response times and availability
  • financial information (for example, how much your hosting costs)

Digital analytics will tell you how people are interacting with the online parts of your service. You should make sure:

  • you choose a digital analytics tool that will give you what your service needs
  • your performance analyst is involved in building your service – it’s much easier to implement data collection and solve problems as you build than to go back and do it afterward

Do not just use digital analytics. Combining data from a range of sources will give you a richer picture of how your service is performing.
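For example, combining sources might look like the sketch below, where web-analytics completions are enriched with call-centre volumes to show the whole service rather than just the online part. All names and figures are hypothetical:

```python
# Hypothetical monthly figures from two data sources.
web = {"2024-05": {"started": 2000, "completed": 1000}}      # digital analytics
calls = {"2024-05": {"assisted_completions": 300}}           # call centre

month = "2024-05"
online = web[month]["completed"]
assisted = calls[month]["assisted_completions"]
total_completions = online + assisted

print(f"{month}: {online} online + {assisted} assisted = {total_completions} completions")
```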

There are different ways to access each data source and you’ll be able to use different tools to do so. Your performance analyst can help you consider practical questions such as:

  • how you’ll gather user feedback – for example, you might need to run a user satisfaction survey or collect data from your call centres
  • what software you’ll need to install if you’re going to monitor service uptime and performance

Consider how data will flow from each data source into your analytics tool.

Establish a baseline

Establish a ‘baseline’ based on current performance trends by channel, against which changes to the service will be judged. This will help you pinpoint the effect of your initiatives, and identify what worked. 

You can use user research to establish baseline data. For example:

  • make recordings of the time taken for users to complete the current ‘as is’ service, for comparison with the ‘new’ version
  • ask participants to rate their ‘as is’ experience.
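A baseline per channel could be computed like this. The channels and completion times are invented for illustration:

```python
from statistics import mean

# Hypothetical monthly completion times (minutes) per channel, 'as is' service.
history = {
    "paper":  [42, 45, 44, 43],
    "phone":  [25, 27, 26, 24],
    "online": [14, 13, 15, 14],
}

# Baseline: average current performance per channel, against which
# later changes to the service can be judged.
baseline = {channel: mean(times) for channel, times in history.items()}
print(baseline)
```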

Collect data

Once you’ve identified all your data sources, your team’s performance analyst will:

  • make sure your data is collected accurately
  • check for statistical significance
  • test for errors or inconsistencies in the data
  • do appropriate analysis

Automate the collection of data as much as possible.

5. Analyse

Analysis of data needs to be timely and help you take action. Effective analysis should tell you:

  • what happened
  • why it happened
  • what you need to do about it

Also see the section on How to use your data.

Give context to your measurements

To be useful, metrics need to have context. You should provide context for your metrics by:

  • using segments in your analysis
  • measuring your service’s performance over time
  • comparing your service’s performance with that of similar services

Segmenting data

Make sure you use segments in your analysis. Segments group your users, for example by:

  • device type – whether they used a phone or a desktop computer to access your service
  • new or repeat user – whether they had used your service before

Using segments correctly will help you identify and understand trends because they help you understand how your users are different. For example, new users may behave very differently from returning users, making an overall average of user behaviour inaccurate.

You can also compare different user groups such as:

  • carers and those being cared for
  • ordinary citizens with professional intermediaries (like solicitors). 

If you break things down by group you might find something interesting that will be masked by aggregating data.
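The masking effect described above can be demonstrated with a small sketch: two segments with very different completion rates average out to an unremarkable overall figure. The segment names and session data are hypothetical:

```python
from collections import defaultdict

# Hypothetical session records: (segment, completed) pairs.
sessions = [
    ("new", True), ("new", False), ("new", False), ("new", False),
    ("returning", True), ("returning", True), ("returning", True), ("returning", False),
]

def completion_by_segment(sessions):
    """Completion rate (%) per user segment."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [completed, total]
    for segment, completed in sessions:
        counts[segment][0] += int(completed)
        counts[segment][1] += 1
    return {seg: 100.0 * done / total for seg, (done, total) in counts.items()}

rates = completion_by_segment(sessions)
# New users complete 25% of the time, returning users 75% —
# yet the overall average is an uninformative 50%.
print(rates)
```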

Measuring performance over time

Establish a ‘baseline’ of how your service performs currently, across all channels. Judge changes to the service’s performance against this.

If you’re building a new service, you can use the performance of existing or legacy systems to inform your baseline.

Measure performance continuously as much as possible, rather than taking ‘snapshots’. This will give you a ‘trend line’ against which peaks and dips in performance can be measured, making it easier to see the effects of:

  • changes to the service
  • communications initiatives
  • seasonal variation

Set targets for what you expect each KPI to be at a specific time in the future if your plans to improve your service are successful. For example, in 6 months’ time, you might expect to see a 10% reduction in the time it takes for a user to make an application because of your work to simplify the process.
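The worked example above (a 10% reduction in application time within 6 months) can be sketched as a simple target check. The baseline and monthly figures are invented:

```python
# Hypothetical: median application time (minutes) measured each month,
# with a target of a 10% reduction from the baseline within 6 months.
baseline_minutes = 20.0
target_minutes = baseline_minutes * 0.9   # 10% reduction -> 18 minutes

monthly_medians = [20.0, 19.5, 19.0, 18.6, 18.2, 17.8]  # trend line, not snapshots

met_target = monthly_medians[-1] <= target_minutes
print(f"target {target_minutes} min, latest {monthly_medians[-1]} min, met: {met_target}")
```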

Comparing performance with similar services

There are likely to be services similar to yours in terms of:

  • number of users
  • complexity
  • type of users
  • type of service
  • frequency users go through the process

Comparing your service’s performance with other, similar services will let you know whether your service is performing significantly better or worse than expected.

6. Visualise

Presenting data visually in a prominent place is a good way to maximise the likelihood that the information will be acted upon and that improvements will be made to your service.

Use dashboards, reports, alerts, and visualisations to share findings.

Dashboards are objective-focused and help inform decisions, often with the help of real-time data. Read about building a data dashboard for a service delivery team.

Alerts are used to inform the users about a change or an event, often using attention-grabbing delivery mechanisms.

Reports provide regular, scheduled snapshots of data and tend to require extra context and time to digest.

You should try to automate the creation of standard reports, for example automating email alerts for ‘trending’ content so your analyst can spend their time on more valuable work.
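As one example of such automation, a scheduled check could generate an alert whenever a KPI crosses a threshold. The function name and threshold are assumptions for illustration; the returned message would then be emailed to the team:

```python
def uptime_alert(availability_pct, threshold_pct=99.0):
    """Return an alert message when availability drops below the threshold, else None."""
    if availability_pct < threshold_pct:
        return (f"ALERT: service availability {availability_pct:.2f}% "
                f"is below the {threshold_pct:.2f}% threshold")
    return None

print(uptime_alert(98.7))  # triggers an alert message
print(uptime_alert(99.9))  # None - no alert needed
```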

7. Monitor, iterate and improve

Iterate your metrics as your service changes – you’ll need to know different things as your service changes over time.

How to use your data

It’s important to use the data you’ve collected to find ways to improve your service and prioritise them.

For example, your completion rate data can help you identify the stages in your user journeys when users are dropping out. Work with your user researcher and plan a round of user research to try to understand why.

You can also segment your user data based on the characteristics of groups of users. For example, you could check if users who are a certain age find the service harder to complete than other users.

If this is the case, you could then work with your user researcher to plan a round of user research with those users.

Best Practices

  • have a performance analyst on your team
  • work out your measurements as you build the service, not afterwards
  • estimate the number of people you expect to use the service – be aware that large numbers may mean you need powerful analytics tools
  • find out the analytics tools your organisation already has and whether they’re suitable for the type and volume of data you’re expecting
  • find out where all your existing data is kept and how you’re going to access it, aggregate it, and make it usable so that you can measure your service’s KPIs
  • start thinking about the different ways users will interact with your service so you can plan how you’ll gather data
  • combine your metrics with user research 
  • communicate the performance of the services to the users and the organization.

Improve the framework

The framework should be reviewed and amended regularly. You should review the framework at least after the implementation of each service and when problems are identified. You should check:

  • how the framework has performed
  • how processes could improve
  • whether stakeholders are informed about the success of the service
  • whether data-gathering automations are working properly
  • whether dashboards, reports, and alerts are working properly
  • any policy changes that could influence the framework, for example changing who is responsible for metrics
