Performance Metrics
09/06/22 | Version 1.0
This document describes the KPI scopes, the 5 core metrics that should be gathered for all services, and additional metrics that can optionally be measured.
KPI Scopes
KPIs can be scoped into the following categories.
Team Performance
Within this scope, data is gathered to determine whether the team performs as defined by the mandate. These KPIs aim to answer the question “Is the team performing well?”.
System Performance
Within this scope, data is gathered to determine whether the system put in place to support the DSF-generated services performs as defined by the mandate. These KPIs aim to answer the question “Is the system performing well?”.
In this context, the system is the set of software that is developed or acquired and used to operate the services based on DSF standards.
Service Performance
Within this scope, data is gathered to determine whether individual services perform as defined by the mandate. These KPIs aim to answer the question “Is the service good?”.
Core metrics
The following 5 metrics have been identified and should be gathered for all services:
1. Time for transaction
Measure the time it takes for a user to make a transaction (i.e. time it takes from starting a service until the submission of an application).
Data Set
time for transaction: average time it takes for a user to make a transaction (i.e. submit an application for a service).
Calculation method
For each transaction: time for transaction = (time the transaction ended – time the transaction started)
To calculate this KPI, you must make the following definitions for each service:
- What is considered a transaction in your service
For example: A transaction is a submission of the Child Birth Grant Application
- When a transaction is considered as started
For example: A transaction starts whenever a user enters a service page after login and can apply for the service
- When a transaction is considered as ended
For example: A transaction ends when a submission is made and the user is displayed a reference number
You should also consider other factors such as:
- Exceptions to the above definitions
For example: Do not mark a transaction as started if:
- the user returned to the start page within the same session
- the user cannot continue with the application because the CY Login user is not authenticated
- the user cannot continue because they have already submitted
- What happens when a user returns to the service with a different session
For example: If there is a save draft functionality, we should use the same transaction id every time the user comes back until the application has been submitted.
- Consider what happens if your service allows saving an application for later
For example: If the user saves the application and returns at a later stage, do not mark a transaction as started again.
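The definitions above can be applied in code. The sketch below is a minimal, illustrative example rather than a prescribed implementation: it assumes transactions are recorded with hypothetical started_at and ended_at fields and that the transaction id is reused when a user returns to a saved draft.

```python
from datetime import datetime
from statistics import mean

# Hypothetical transaction records; in practice these would come from the
# service's own data store, keyed by a transaction id that is reused when a
# user returns to a saved draft (so a resumed application is not counted twice).
transactions = [
    {"id": "tx-001", "started_at": datetime(2022, 6, 1, 9, 0), "ended_at": datetime(2022, 6, 1, 9, 18)},
    {"id": "tx-002", "started_at": datetime(2022, 6, 1, 10, 5), "ended_at": datetime(2022, 6, 1, 10, 32)},
    {"id": "tx-003", "started_at": datetime(2022, 6, 2, 14, 0), "ended_at": None},  # not yet submitted
]

def time_for_transaction(records):
    """Average time (in minutes) from transaction start to submission.

    Records without an end time (abandoned or still in progress) are excluded,
    per the definitions above: a transaction only ends when a submission is
    made and the user is shown a reference number.
    """
    durations = [
        (r["ended_at"] - r["started_at"]).total_seconds() / 60
        for r in records
        if r["ended_at"] is not None
    ]
    return mean(durations) if durations else None

print(f"time for transaction: {time_for_transaction(transactions):.1f} minutes")
```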
Scope:
System and Service Performance.
2. User satisfaction
Measure if users are satisfied with their experience of using the services.
Data Set
Use data from the feedback software that will be in place. We want to be able to review the following data for a defined time period:
- very dissatisfied: count of feedback responses rated “very dissatisfied” during the set period
- dissatisfied: count of feedback responses rated “dissatisfied” during the set period
- neither satisfied nor dissatisfied: count of feedback responses rated “neither satisfied nor dissatisfied” during the set period
- satisfied: count of feedback responses rated “satisfied” during the set period
- very satisfied: count of feedback responses rated “very satisfied” during the set period
These results should be filterable by service and/or stage, where by “stage” we mean the point at which the user gave feedback (e.g. on submission, on drop-out, or as general feedback).
Getting feedback
A standard feedback page can be used to gather the user’s feedback. Services should allow users to give feedback at various stages of using them, such as:
- At the end of your online service
- At any point
- When users drop out
The feedback page could have the following format:
- User satisfaction – very dissatisfied
- User satisfaction – dissatisfied
- User satisfaction – neither satisfied nor dissatisfied
- User satisfaction – satisfied
- User satisfaction – very satisfied
You should also provide a text box where users can enter suggestions on how the service can be improved.
Calculation Method
Create an overall percentage satisfaction score from the rating results, together with the number of users who submitted feedback, using the formula below.
user satisfaction = [(v dis * 0) + (dis * 25) + (neither * 50) + (sat * 75) + (v sat * 100)] / total ratings
Where:
- ‘v dis’ is the count of very dissatisfied ratings
- ‘dis’ is the count of dissatisfied ratings
- ‘neither’ is the count of neither satisfied nor dissatisfied ratings
- ‘sat’ is the count of satisfied ratings
- ‘v sat’ is the count of very satisfied ratings
- ‘total ratings’ is the count of all ratings
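As an illustration, the formula can be written as a short function. The sketch below assumes the five rating counts have already been extracted from the feedback software for the chosen period; the function and variable names are hypothetical.

```python
def user_satisfaction(v_dis: int, dis: int, neither: int, sat: int, v_sat: int) -> float:
    """Overall percentage satisfaction score from rating counts.

    Each rating band is weighted from 0 (very dissatisfied) to 100 (very
    satisfied) and the weighted sum is divided by the total number of ratings.
    """
    total = v_dis + dis + neither + sat + v_sat
    if total == 0:
        return 0.0  # no feedback submitted in the period
    return (v_dis * 0 + dis * 25 + neither * 50 + sat * 75 + v_sat * 100) / total

# Example: counts taken from the feedback tool for one service over one month
print(user_satisfaction(v_dis=4, dis=10, neither=20, sat=120, v_sat=46))  # -> 74.25
```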
Scope:
System and Service Performance.
3. Transaction completion rate
Measure how many transactions were started for a service and compare them with how many were completed (i.e. how many birth grant benefit applications have started and how many have been submitted).
Data Set
The following data are needed:
- transactions ended: count of the number of transactions ended through the digital service over a set period (i.e. how many applications have been submitted through the online service).
- transactions started: count of the number of transactions started over a set period (i.e. how many applications have started).
To calculate this KPI, you must use the same definitions made for the “time for transaction” KPI regarding when a transaction starts and ends.
Calculation method
transaction completion rate = (transactions ended ÷ transactions started) * 100.
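A minimal sketch of this calculation is shown below, assuming the two counts come from the data set described above; the same one-line ratio pattern also applies to the other percentage KPIs in this document (digital service take-up, change fail percentage, service success rate).

```python
def completion_rate(transactions_ended: int, transactions_started: int) -> float:
    """Percentage of started transactions that were completed in the period."""
    if transactions_started == 0:
        return 0.0  # avoid division by zero when no transactions started
    return (transactions_ended / transactions_started) * 100

print(completion_rate(transactions_ended=340, transactions_started=425))  # -> 80.0
```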
Scope:
Service Performance.
4. Digital service take-up
Measure how many transactions were made from the digital service versus how many were made from non-digital channels (i.e. submit an application for a service) over a fixed period of time. In this context “digital service” is the service you are creating.
A comparison can also be made with “estimated digital transactions” whenever such information exists.
Data Set
The following data are needed:
- digital transactions: count of the number of transactions ended through the digital service over a set period (i.e. how many applications have been submitted through the online service). Note that this is the same metric as “transactions ended”, defined in the “transaction completion rate” KPI.
- all transactions: count of the total number of transactions from all channels over a set period.
Calculation method
digital service take-up = (digital transactions ÷ all transactions) * 100.
Scope:
System and Service Performance.
5. Service availability
Measure service availability in terms of uptime and downtime.
Data Source and update method
The data can be gathered from an uptime test service.
Calculation method
Availability = (Uptime ÷ (Uptime + Downtime)) * 100
NOTE: If a digital service is available only at certain times of the day, time outside the intended hours of operation is not considered downtime.
Example: Let’s say you’re trying to calculate the availability of a service. That service ran for 200 hours in a single month. That service also had two hours of unplanned downtime because of a breakdown, and eight hours of downtime for weekly programmed maintenance. That equals 10 hours of total downtime.
Here is how to calculate the availability of that asset:
Availability = 200 ÷ (200 + 10)
Availability = 200 ÷ 210
Availability = 0.952
Availability = 95.2%
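The same example can be reproduced with a small helper function, sketched below. It follows the NOTE above by expecting uptime and downtime measured only within the intended hours of operation; the figures are the ones from the example.

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Availability as a percentage of the intended hours of operation.

    Time outside the intended hours of operation should not be included in
    either figure, so a service that only runs during working hours is not
    penalised for being offline overnight.
    """
    total = uptime_hours + downtime_hours
    if total == 0:
        return 0.0
    return (uptime_hours / total) * 100

# Figures from the example: 200 hours up, 2 hours unplanned + 8 hours planned downtime
print(round(availability(uptime_hours=200, downtime_hours=10), 1))  # -> 95.2
```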
Scope:
System and Service Performance.
Additional Metrics
The following additional metrics have been identified and can be optionally measured.
Change fail percentage
The percentage of deployments causing a failure in production over a period of time.
Data Source and update method
The data can be gathered from the deployment service.
Calculation method
change fail percentage = (failed deployments ÷ total deployments) * 100.
Scope
Team Performance.
Time to recovery
How long it takes to recover from a failure in production.
NOTE: you need to know when the incident was created and when it was resolved.
Data Source and update method
The data can be gathered from the incident management and deployment service.
Calculation method
Average time elapsed from a product or system failure to its recovery.
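A minimal sketch of this calculation is shown below, assuming incidents are exported from the incident management service with created and resolved timestamps; the field names are illustrative.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident export; in practice this would come from the
# incident management service, with one row per production failure.
incidents = [
    {"created": datetime(2022, 6, 3, 10, 0), "resolved": datetime(2022, 6, 3, 11, 30)},
    {"created": datetime(2022, 6, 17, 14, 15), "resolved": datetime(2022, 6, 17, 14, 45)},
]

def mean_time_to_recovery(records) -> float:
    """Average time in hours from incident creation to resolution."""
    durations = [
        (r["resolved"] - r["created"]).total_seconds() / 3600 for r in records
    ]
    return mean(durations) if durations else 0.0

print(mean_time_to_recovery(incidents))  # -> 1.0 (a 90-minute and a 30-minute incident)
```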
Scope
Team Performance.
Number of teams/people working in an agile way
An indicator of how many teams and people are working in an agile way (with multidisciplinary team members), including suppliers, over time.
Data Set
The following data are needed:
- date: the first day of the corresponding period
- number of people: the data must be in number format
- number of teams: the data must be in number format
Update method
Data can be manually updated by the Product Manager of the service.
Calculation method
- Display the number of people over any fixed period.
- Display the number of teams over any fixed period.
Scope
Team Performance.
Number of deployments
Measure the number of deployments over time.
Data Set
Deployments should be kept in a table format with at least the following columns.
- service name: in text format
- deployment date: in a date format. Deployment date is considered the day that the service is live for the public.
- deployment interval: in number format.
Deployment interval is the interval in days between deployments (see “time for deployment” KPI). If this data set is kept in a Google Sheet, you only need to manually update the interval for the first deployment; the rest is updated automatically by a formula, e.g. on cell C3: =ArrayFormula(if(len(A3:A);B3:B-B2:B;""))
- accessibility: in number format from 0 to 100. Represents the WCAG accessibility score for each service (see “service accessibility” KPI).
Additional columns can be kept for each deployment to store more information if needed.
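If the data set is not kept in a Google Sheet, the interval calculation can be reproduced in a few lines of code. The sketch below is illustrative only: it assumes an ordered list of deployment dates for one service and mirrors the spreadsheet formula above.

```python
from datetime import date

# Hypothetical deployment log for one service, ordered by date
deployment_dates = [date(2022, 3, 1), date(2022, 3, 15), date(2022, 4, 4)]

# Deployment interval in days between consecutive deployments,
# equivalent to the B3:B - B2:B spreadsheet formula above.
intervals = [
    (later - earlier).days
    for earlier, later in zip(deployment_dates, deployment_dates[1:])
]
print(intervals)                        # -> [14, 20]
print(sum(intervals) / len(intervals))  # average interval, used by the "time for deployment" KPI
```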
Update method
Data can be manually updated by the Delivery Manager of the service.
Calculation method
Count the number of deployments over any fixed period.
Scope
Team Performance.
Time for deployment
Measure how long it takes for DSF to deploy a service.
Data Set
Uses the same data set used for the “number of deployments” KPI.
Update Method
You only need to update the “deployment interval” of the first deployment; a formula will take care of the rest.
Calculation method
Average of the “deployment interval” over any fixed period.
Scope:
Team Performance.
Service accessibility
Measure conformity with WCAG 2.0 Level AA.
Data Set
Uses the same data set used for the “number of deployments” KPI.
Calculation method
Average of the “accessibility” score over a fixed period of time, or per service.
Update method
Data can be manually updated by the Delivery Manager of the service.
Scope:
Service Performance, System Performance.
Broken links report
Track repeatedly occurring broken links.
Data Set
The data can be gathered from an analytics service.
Calculation method
Display the top repeating broken links.
Scope:
System Performance.
Page load time
Measure the time it takes to load a page/service.
Data Source and update method
The data can be gathered from a web analytics service.
Calculation method
Average page load time over a set period.
Scope:
System and Service Performance.
Service completion time
Measure the time it takes for each service to be completed, from the time it is started through your service until it is completed. This measures completion of a service end to end, i.e. the time from the submission of an application until, for example, the Child Birth Grant is granted.
NOTE: This KPI should only be used when there is a clear definition of “service completed” and data can be extracted to indicate the time of completion.
Calculation method
Display the “service completion time”.
Scope:
Service Performance.
Service success rate
Measure the success rate (i.e. how many applications have been approved) of each service started through the DSF service.
NOTE: This KPI should only be used when there is a clear definition of “service successful” and data can be extracted to indicate this.
Calculation method
service success rate = (successful services ÷ digital transactions) * 100.
Scope:
Service Performance.