Testing Process for a Digital Service


Version 1.0

Functional and Non-Functional Testing Guide

Version control table

Version | Date | Comments
1.0 | 27/5/2025 | Published Document

1. Introduction

Testing helps ensure your service works well, is secure, and is accessible.

There are two main types of testing:

  • Functional Testing
  • Non-Functional Testing

This guide offers a simple approach for testing a digital service. It covers functional and non-functional testing, outlines key steps, and includes practical tips to help teams set up, run, and document tests effectively.

Following these steps will help your team catch issues early and deliver a reliable service that works well for users.

2. Why Testing Matters

Testing helps you:

  • Catch bugs early before they affect users
  • Save time and cost on fixes later
  • Make services smoother and easier to use
  • Meet security, accessibility, and quality standards

Every test you run now means fewer complaints, issues, and rework after launch.

3. Functional Testing

Functional testing ensures that a digital service works as expected. It checks whether each feature does what it’s supposed to do, helping to catch and fix issues before users encounter them.

In addition to testing the service against predefined scenarios, try to intentionally break the flow: enter invalid data, skip required steps, or trigger edge cases. This helps confirm that all validations work properly and that the service handles unexpected behavior in a clear and user-friendly way.

3.1 Setting Up Functional Testing

Before testing, make sure you understand the service goals, user needs, and business rules from the Discovery and Design phases. You will need to:

  • Identify the different flows of the service (user journeys)
  • Write test scenarios
  • Identify and prepare test users
  • Use a staging environment

3.1.1 Define Test Scenarios

  • List key features to test (CyLogin, CyNotify, file upload, API data calls, application submission, payments, etc.).
  • Consider different user actions and possible edge cases (e.g., what happens if a user enters invalid data?).
  • Use simple Test Scenario formats to test the different user journeys of the service:

Example:

Test Scenario | Expected Result
User submits a form with valid data | Form is submitted successfully, and a confirmation message appears
User submits a form with missing required fields | The system shows an error message highlighting the missing fields
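
Scenarios like the ones above can also be automated. Below is a minimal sketch using Playwright and TypeScript; the staging URL, field labels, and messages are placeholders rather than part of this guide, so adapt them to your own service:

```typescript
import { test, expect } from '@playwright/test';

// Placeholder staging URL - replace with your service's staging environment.
const BASE_URL = 'https://staging.example.service.gov.cy';

test('form submits successfully with valid data', async ({ page }) => {
  await page.goto(`${BASE_URL}/your-details`);
  await page.getByLabel('Email').fill('test.user@example.com');
  await page.getByLabel('Phone Number').fill('+357 99 123456');
  await page.getByRole('button', { name: 'Submit' }).click();

  // Expected result: a confirmation message appears.
  await expect(page.getByText('Your application has been submitted')).toBeVisible();
});

test('missing required fields show an error message', async ({ page }) => {
  await page.goto(`${BASE_URL}/your-details`);
  // Leave the required fields empty and submit.
  await page.getByRole('button', { name: 'Submit' }).click();

  // Expected result: the error summary highlights the missing fields.
  await expect(page.getByText('There is a problem')).toBeVisible();
  await expect(page.getByText('Enter your email address')).toBeVisible();
});
```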

3.1.2 Prepare a Testing Environment

  • Use the staging/testing environment of the service (never production)
  • Create test users to cover all possible scenarios on the CY Login Test Environment. For guidance, contact cds-support@dits.dmrid.gov.cy
  • Use realistic (but fake) data
  • Match the staging environment to the production environment as closely as possible
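
Realistic but fake data can be generated rather than typed by hand. The sketch below assumes a Node-based toolchain and the @faker-js/faker package (neither is required by this guide); the field names are illustrative only:

```typescript
import { faker } from '@faker-js/faker';

// Generate a batch of fake citizens for the staging environment.
// No real personal data should ever be used in testing.
function makeTestUser(id: number) {
  return {
    username: `citizen${id}`,
    fullName: faker.person.fullName(),
    email: faker.internet.email(),
    phone: faker.phone.number(),
    dateOfBirth: faker.date.birthdate({ min: 18, max: 90, mode: 'age' }),
  };
}

const testUsers = Array.from({ length: 10 }, (_, i) => makeTestUser(i + 1));
console.log(testUsers);
```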

3.2 How to Run Tests and Report Issues

3.2.1 Execute Test Scenarios

  • Follow the Test Scenarios one by one and record the results.
  • Report on the results of all tests. For failed tests, include a Failed Issue ID.

Example Test Execution Report:

Test Scenario | Test user | Expected Result | Actual Result | Pass/Fail | Failed Issue ID | Notes
User submits a form with valid data | citizen25 | Form submitted successfully | Form submitted successfully | Pass | – | –
User submits a form with missing fields | citizen32 | Error message appears | No error message | Fail | BUG-003 | Validation missing on “Email” field

Download a sample test report
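
Results can be kept in whatever format your team prefers. As one option, the sketch below records them as structured data and saves a JSON report; the field names simply mirror the table above:

```typescript
import { writeFileSync } from 'node:fs';

type TestResult = {
  scenario: string;
  testUser: string;
  expectedResult: string;
  actualResult: string;
  status: 'Pass' | 'Fail';
  failedIssueId?: string;
  notes?: string;
};

const results: TestResult[] = [
  {
    scenario: 'User submits a form with valid data',
    testUser: 'citizen25',
    expectedResult: 'Form submitted successfully',
    actualResult: 'Form submitted successfully',
    status: 'Pass',
  },
  {
    scenario: 'User submits a form with missing fields',
    testUser: 'citizen32',
    expectedResult: 'Error message appears',
    actualResult: 'No error message',
    status: 'Fail',
    failedIssueId: 'BUG-003',
    notes: 'Validation missing on "Email" field',
  },
];

// Save the execution report so it can be attached to the test documentation.
writeFileSync('test-execution-report.json', JSON.stringify(results, null, 2));
```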

3.2.2 Log Issues and Fix Bugs

  • Report failed tests to the relevant team member, including screenshots, test user details, steps to reproduce, the tested scenario, and its reference number (if available). We recommend using the Issues section of your team’s GitHub repository to log and track these reports.
  • After the issue is fixed, run the test again to confirm it’s fully resolved
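
If your team uses GitHub Issues, a failed test can also be logged programmatically through the GitHub REST API. A minimal sketch; OWNER/REPO and the access token are placeholders:

```typescript
// Minimal sketch: create a GitHub issue for a failed test via the REST API.
// OWNER/REPO is a placeholder, and a personal access token with permission
// to create issues is assumed to be available as GITHUB_TOKEN.
async function reportFailedTest(issueId: string, scenario: string, steps: string[]) {
  const response = await fetch('https://api.github.com/repos/OWNER/REPO/issues', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: 'application/vnd.github+json',
    },
    body: JSON.stringify({
      title: `${issueId}: ${scenario}`,
      body: `**Steps to reproduce**\n${steps.map((s, i) => `${i + 1}. ${s}`).join('\n')}`,
      labels: ['bug', 'functional-testing'],
    }),
  });
  if (!response.ok) throw new Error(`GitHub API error: ${response.status}`);
}

// Example call for the failed scenario from the execution report above.
// reportFailedTest('BUG-003', 'User submits a form with missing fields',
//   ['Login using test user', 'Open the form', 'Leave Email empty', 'Click Submit']);
```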

Example Issue/Bug Report if GitHub Issues is not used:

Issue ID | Date | Tester | Test User | Steps to Reproduce | Expected Result | Actual Result | Severity | Suggested Fix | Assigned to | Status
BUG-001 | 25/02/2025 | Christina Papadopoulou | citizen23 | 1. Login using test user 2. Open the form 3. Leave ‘Email’ and ‘Phone Number’ empty 4. Click ‘Submit’ | Error message highlights missing fields | No error message shown; user doesn’t know why submission failed | High | Add validation and error messages for required fields | – | Fix in progress
BUG-002 | 26/02/2025 | Andreas Andreou | citizen45 | 1. Login using test user 2. Fill out form with valid data 3. Correct a mistake in ‘Date of Birth’ field 4. Try to submit again | Submit button becomes active after correcting the error | Submit button stays disabled even after fixing the field | Medium | Enable the submit button once all validation errors are cleared | – | Reported

Download a sample issue report

4. Non-Functional Testing

Non-functional testing checks how well a digital service performs beyond just its features. It covers areas like speed, reliability, scalability, and overall user experience. This includes:

  • Performance Testing (by tracking key performance indicators),
  • Load and Stress Testing,
  • Security Testing (Penetration Testing), and
  • Accessibility Testing.

These tests help ensure the service is stable, secure, and works well under different conditions.

4.1 Performance Testing

Measure how fast, stable, and available the service is.

The Performance Lead is responsible for monitoring and evaluating the service’s performance, availability, responsiveness, and reliability. Regular and accurate tracking helps ensure the service meets user needs and aligns with government standards.

The primary focus of this work is the measurement of the five core Key Performance Indicators (KPIs) defined in the Performance Framework. The Performance Lead must ensure that the necessary data is being collected to accurately measure these:

  • Time for a transaction: How long does it take for users to make a transaction using the service?
  • User satisfaction: What percentage of users are satisfied with their experience using the service?
  • Transaction completion rate: What percentage of transactions do users complete?
  • Digital take-up: What percentage of users choose the digital service to complete their task over non-digital channels?
  • Service availability: What is the percentage of service uptime and downtime?
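
Most of these KPIs are simple percentages over the data you collect. The sketch below shows the arithmetic, using made-up figures purely for illustration:

```typescript
// Illustrative figures only - replace with data from your analytics tools.
const transactionsStarted = 1200;
const transactionsCompleted = 1020;
const digitalTransactions = 1020;
const totalTransactionsAllChannels = 1500; // digital + non-digital channels
const satisfiedResponses = 310;
const totalFeedbackResponses = 380;
const minutesInPeriod = 30 * 24 * 60;      // a 30-day month
const minutesOfDowntime = 45;

const completionRate = (transactionsCompleted / transactionsStarted) * 100;        // 85.0 %
const digitalTakeUp = (digitalTransactions / totalTransactionsAllChannels) * 100;  // 68.0 %
const userSatisfaction = (satisfiedResponses / totalFeedbackResponses) * 100;      // ~81.6 %
const serviceAvailability =
  ((minutesInPeriod - minutesOfDowntime) / minutesInPeriod) * 100;                 // ~99.9 %

console.log({ completionRate, digitalTakeUp, userSatisfaction, serviceAvailability });
```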

In addition to the core KPIs, teams should regularly monitor supporting performance metrics to get a more detailed view of service behavior. These include:

  • Page load times
  • Response times
  • Failure rates
  • Uptime percentages
  • Error frequencies
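
Some of these supporting metrics can be read directly in the browser. A small sketch using the standard Navigation Timing API (no extra tooling assumed):

```typescript
// Read basic page-performance figures from the browser's Navigation Timing API.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

if (nav) {
  const pageLoadTimeMs = nav.loadEventEnd - nav.startTime;          // full page load
  const responseTimeMs = nav.responseEnd - nav.requestStart;        // server response
  const domReadyMs = nav.domContentLoadedEventEnd - nav.startTime;  // DOM ready

  console.log({ pageLoadTimeMs, responseTimeMs, domReadyMs });
}
```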

To support performance monitoring, DSF uses a range of tools and data sources:

  • Matomo (web analytics tool) – for monitoring user experience
  • Pingdom – for tracking service availability
  • Feedback page – for capturing user satisfaction
  • API statistics – for analysing trigger events

Note: The use of Matomo on-premise is mandatory. It has been selected as the horizontal solution for managing web analytics for government digital services developed according to the Service Standard and hosted on gov.cy. Pingdom and Power BI are monitoring and analytics tools used in DSF to track and visualise performance metrics of digital services; similar tools can be used to achieve the same monitoring and analysis tasks.
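
For reference, a Matomo page-view tracking call for a self-hosted instance looks roughly like the sketch below (written in TypeScript; the analytics URL and site ID are placeholders for your on-premise installation):

```typescript
// Minimal sketch of Matomo page-view tracking for a self-hosted instance.
// MATOMO_URL and SITE_ID are placeholders.
const MATOMO_URL = 'https://analytics.example.gov.cy/';
const SITE_ID = '1';

const _paq: unknown[][] = ((window as any)._paq = (window as any)._paq || []);
_paq.push(['trackPageView']);
_paq.push(['enableLinkTracking']);
_paq.push(['setTrackerUrl', `${MATOMO_URL}matomo.php`]);
_paq.push(['setSiteId', SITE_ID]);

// Load the Matomo tracker script asynchronously.
const script = document.createElement('script');
script.async = true;
script.src = `${MATOMO_URL}matomo.js`;
document.head.appendChild(script);
```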

Visual dashboards should be used to track and interpret this data. Regular reviews of both core and supporting KPIs are essential for identifying issues and driving performance improvements.

4.1.1 Documenting Test Results

4.1.1.1 Key Performance Indicators (KPIs) Sample Report

Key Performance Indicators (KPIs) Sample Report image

4.1.1.2 Matomo Sample Performance Metrics

Matomo Visits overview image
Matomo evolution of page performance metrics image

4.1.1.3 Uptime Sample Report

Uptime report image

4.2 Load and Stress Testing

Simulate traffic to check how the service performs under pressure.

DevOps and Infrastructure teams are responsible for setting up and maintaining load and stress test tools, such as Apache JMeter, to ensure the service can handle expected and unexpected levels of user activity, measure response times, and support capacity planning. Performance is further validated through Load and Stress Testing of both Front-end and Back-end APIs.

Front-end Testing ensures that user interfaces remain responsive under typical and peak loads, while Back-end Testing assesses the system’s resilience, latency, and processing efficiency. These tests help uncover breaking points and support the development of fallback strategies.
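
Apache JMeter is one option for these tests. As an equivalent illustration, the sketch below uses k6 (a scriptable load-testing tool, not mandated by this guide) to simulate 200 virtual users for 60 seconds against a placeholder staging endpoint:

```typescript
import http from 'k6/http';
import { check, sleep } from 'k6';

// 200 virtual users for 60 seconds - adjust to your service's expected usage.
export const options = {
  vus: 200,
  duration: '60s',
};

export default function () {
  // Placeholder endpoint - point this at the staging API, never at production.
  const res = http.get('https://staging.example.service.gov.cy/api/applications');

  check(res, {
    'status is 200': (r) => r.status === 200,
    'response under 800ms': (r) => r.timings.duration < 800,
  });

  sleep(1); // think time between requests
}
```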

4.2.1 Documenting Test Results

4.2.1.1 Sample Test Report

Include screenshots or graphs from your load testing tool that show results for both front-end and back-end components. These results should demonstrate how the service behaves under realistic traffic and stress conditions.

The following metrics are typically recorded during load and stress testing:

Metric | Description
Response time (avg/min/max) | Time taken to complete a request
Throughput | Number of requests handled per second
Concurrent users | Number of users active during the test
CPU usage under load | System resource usage during peak activity
Memory usage | Memory consumption during test
Error rate | Percentage of failed or timed-out requests

These metrics help identify performance bottlenecks, capacity limits, and areas that may need optimisation.

Tip: Save your test results in a structured format (e.g. screenshots, CSV export, or a visual dashboard) and include them in your final test report.

Sample: Summary Report Table – 200 users in 60 seconds

Sample: Response time (Front-end) – 200 users in 60 seconds

Sample: 200 users submitted 580 characters via API in 60 seconds

Sample: 200 users retrieved data from the API in 60 seconds

Note: Adjust load test figures based on your service’s expected usage and performance goals.

4.3 Security Testing (Penetration Testing)

Security testing helps identify and fix vulnerabilities before a service goes live. An independently certified penetration tester must carry out a Web Application Security Audit (WASA) in line with the relevant policy.

To comply with the WASA policy:

  • Use secure coding practices
  • Address known risks (e.g. OWASP Top 10)
  • Run regular audits and security reviews
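
As one small illustration of secure coding practices (assuming a Node/Express front end, which this guide does not mandate), the helmet middleware sets a number of protective HTTP headers by default:

```typescript
import express from 'express';
import helmet from 'helmet';

const app = express();

// helmet() sets protective defaults such as Content-Security-Policy,
// Strict-Transport-Security and X-Content-Type-Options, which address
// several common header-related risks from the OWASP Top 10.
app.use(helmet());

// Avoid advertising the underlying framework in responses.
app.disable('x-powered-by');

app.listen(3000);
```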

Penetration testing simulates real-world attacks to check how the service holds up. All findings, risks, and fixes must be documented before launch.

4.3.1 Documenting Test Results

Submit the final WASA report confirming the service passed with no security issues. If the report contains sensitive information, anonymise it before sharing. The report should include:

  • Date of the assessment
  • Scope of the penetration test
  • Key findings
  • Actions taken to fix the issues

4.4 Accessibility Testing

Accessibility testing checks that the service works for all users, including those with disabilities. It helps identify issues that might affect users who rely on assistive technologies or need specific design considerations.

Your service must meet at least WCAG 2.1 AA standards. Test key areas such as:

  • Keyboard-only navigation
  • Screen reader support
  • Colour contrast
  • Text alternatives for images and icons

Use the methods and checklist provided in the Unified Design System (UDS) under the Accessibility Statement Pattern – Test your service or site for accessibility. This will help you run both manual and tool-based checks and track any issues that need to be fixed.
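
Tool-based checks can also run automatically alongside your functional tests. A minimal sketch using axe-core via @axe-core/playwright, checking a placeholder page against WCAG 2.1 A/AA rules:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('start page has no WCAG 2.1 AA violations', async ({ page }) => {
  await page.goto('https://staging.example.service.gov.cy/start'); // placeholder URL

  // Run axe-core against the rendered page, limited to WCAG 2.1 A/AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();

  // Any violations found should be logged and fixed before launch.
  expect(results.violations).toEqual([]);
});
```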

4.4.1 Documenting Test Results

4.4.1.1 Sample Test Report

Page URL / Name | Axe DevTools (Reporting tool) | Keyboard Nav | Screen Reader | Voice Control | Zoom/Magnifier | Contrast Mode | Visually impaired user (user research) | Issues Found | Notes
/start | – | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Fails accessibility check symbol | 0 | User research not yet conducted.
/your-details | – | Pass symbol | Pass symbol | Pass symbol | Issue found or partially working symbol | Pass symbol | Fails accessibility check symbol | 3 | Zoom at 200% overlaps label and field.
/add-child | – | Pass symbol | Pass symbol | Issue found or partially working symbol | Pass symbol | Pass symbol | Fails accessibility check symbol | 1 | Some dynamic content is not announced by NVDA.
/review | – | Pass symbol | Pass symbol | Not applicable symbol | Pass symbol | Pass symbol | Fails accessibility check symbol | 0 | Fully accessible in current tests.
/confirmation | – | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Issue found or partially working symbol | Pass symbol | 1 | Link text underlined but too light in high contrast mode.

Download a sample accessibility report

Legend

  • Pass symbol = Pass
  • Issue found or partially working symbol = Issue found or partially working
  • Fails accessibility check symbol = Fails the check
  • Not applicable symbol = Not applicable for that page (e.g. no voice input)

4.5 Design system testing

Design system testing checks that the service follows the Unified Design System and provides a consistent, clear, and user-friendly experience across gov.cy. It helps make sure the service is citizen-focused, accessible, and easy to use.

Use the Unified Design System documentation and apply the “4.2 – Consistent styles with the Digital Services Design System” checklist to review each page of the service. This helps confirm that all design elements, layouts, and interactions align with the expected standards.

4.5.1 Documenting Test Results

4.5.1.1 Sample Test Report

Page URL / Name | 4.1 Simple for all users | 4.2.1 Design system principles | 4.2.2 Include the HTML 5 important globals | 4.2.18 Error messages and error summary | 4.2.20 Check answers pattern | Notes
/start | Pass symbol | Pass symbol | Pass symbol | Not applicable symbol | Pass symbol | Clear entry page. Simple intro and CTA.
/your-details | Pass symbol | Pass symbol | Pass symbol | Issue found or partially working symbol | Not applicable symbol | Error if name missing just says “There is a problem”. Needs a more helpful error.
/add-child | Pass symbol | Pass symbol | Pass symbol | Issue found or partially working symbol | Not applicable symbol | Error message for invalid date is too generic. Suggest a specific format hint.
/child-parent-type | Issue found or partially working symbol | Pass symbol | Pass symbol | Pass symbol | Fails accessibility check symbol | “Select the parent type” is unclear for some users. Too much technical terminology.
/review | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Well-structured summary. All data visible and editable.
/confirmation | Pass symbol | Pass symbol | Pass symbol | Not applicable symbol | Issue found or partially working symbol | Confirmation message does not explain the expected timeline or reference number.

Download a sample Design System test report

Legend

  • Pass symbol = Pass
  • Issue found or partially working symbol = Issue found or partially working
  • Fails accessibility check symbol = Fails the check
  • Not applicable symbol = Not applicable for that page (e.g. no inputs in this page)

4.6 Device Testing

Device testing ensures that digital services work well across a variety of devices and browsers. At a minimum, test the service in:

  • iOS Safari
  • macOS Safari
  • Windows Chrome
  • Windows Edge
  • Windows Firefox
  • Android Chrome
  • Android Samsung browser
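
If you automate browser checks, the same tests can run across most of these combinations. A minimal sketch of a Playwright configuration using built-in device presets (Samsung Internet has no preset, so check it manually on a real device):

```typescript
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'Windows Chrome', use: { ...devices['Desktop Chrome'] } },
    { name: 'Windows Edge', use: { ...devices['Desktop Edge'], channel: 'msedge' } },
    { name: 'Windows Firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'macOS Safari', use: { ...devices['Desktop Safari'] } },
    { name: 'iOS Safari', use: { ...devices['iPhone 13'] } },
    { name: 'Android Chrome', use: { ...devices['Pixel 7'] } },
    // Samsung Internet has no Playwright preset - test it manually on a device.
  ],
});
```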

4.6.1 Documenting Test Results

4.6.1.1 Sample Test Report

Page URL / Name | iOS Safari | macOS Safari | Windows Chrome | Windows Edge | Windows Firefox | Android Chrome | Android Samsung | Issues Found | Notes
/start | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | 0 | Fully functional.
/your-details | Pass symbol | Pass symbol | Issue found or partially working symbol | Pass symbol | Pass symbol | Issue found or partially working symbol | Pass symbol | 2 | Input border cut off on Windows Chrome & Android Chrome.
/add-child | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | Pass symbol | 0 | Responsive layout works well.
/review | Pass symbol | Pass symbol | Pass symbol | Issue found or partially working symbol | Pass symbol | Pass symbol | Pass symbol | 1 | Button focus styling missing in Windows Edge.

Download a sample device test report

Legend

  • Pass symbol = Pass
  • Issue found or partially working symbol = Issue found or partially working