Testing Process for a Digital Service
Functional and Non-Functional Testing Guide
Version control table
| Version | Date | Comments |
|---|---|---|
| 1.0 | 27/05/2025 | Published Document |
1. Introduction
Testing helps ensure your service works well, is secure, and is accessible.
There are two main types of testing:
- Functional Testing
- Non-Functional Testing
This guide offers a simple approach for testing a digital service. It covers functional and non-functional testing, outlines key steps, and includes practical tips to help teams set up, run, and document tests effectively.
Following these steps will help your team catch issues early and deliver a reliable service that works well for users.
2. Why Testing Matters
Testing helps you:
- Catch bugs early before they affect users
- Save time and cost on fixes later
- Make services smoother and easier to use
- Meet security, accessibility, and quality standards
Every test you run now means fewer complaints, issues, and rework after launch.
3. Functional Testing
Functional testing ensures that a digital service works as expected. It checks whether each feature does what it’s supposed to do, helping to catch and fix issues before users encounter them.
In addition to testing the service against predefined scenarios, try to intentionally break the flow: enter invalid data, skip required steps, or trigger edge cases. This helps make sure that all validations are working properly and that the service handles unexpected behavior in a clear and user-friendly way.
3.1 Setting Up Functional Testing
Before testing, make sure you understand the service goals, user needs, and business rules from the Discovery and Design phases. You will need to:
- Identify the different flows of the service (user journeys)
- Write test scenarios
- Identify and prepare test users
- Use a staging environment
3.1.1 Define Test Scenarios
- List key features to test (CyLogin, CyNotify, file upload, API data calls, application submission, payments, etc.).
- Consider different user actions and possible edge cases (e.g., what happens if a user enters invalid data?).
- Use simple Test Scenario formats to test the different user journeys of the service:
Example:
| Test Scenario | Expected Result |
|---|---|
| User submits a form with valid data | Form is submitted successfully, and a confirmation message appears |
| User submits a form with missing required fields | The system shows an error message highlighting the missing fields |
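If your team also automates functional checks, the two scenarios above can be scripted with a browser automation tool. Below is a minimal sketch using Playwright in TypeScript; the staging URL, field labels, and message text are placeholders for illustration, not the actual service:

```typescript
import { test, expect } from '@playwright/test';

// Placeholder URL and labels -- replace with your service's staging details.
const FORM_URL = 'https://staging.example.gov.cy/apply';

test('submits successfully with valid data', async ({ page }) => {
  await page.goto(FORM_URL);
  await page.getByLabel('Full name').fill('Maria Test');
  await page.getByLabel('Email').fill('maria.test@example.com');
  await page.getByRole('button', { name: 'Submit' }).click();

  // Expect a confirmation message after a valid submission.
  await expect(page.getByText('Your application has been submitted')).toBeVisible();
});

test('shows an error summary when required fields are missing', async ({ page }) => {
  await page.goto(FORM_URL);
  // Leave the required fields empty and submit.
  await page.getByRole('button', { name: 'Submit' }).click();

  // Expect an error summary that highlights the missing fields.
  await expect(page.getByRole('alert')).toContainText('Email');
});
```

Run the suite with `npx playwright test` against the staging environment and copy the outcomes into the test execution report.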
3.1.2 Prepare a Testing Environment
- Use the Staging/Testing environment of the service (not the production)
- Create test users to cover all possible scenarios on the CY Login Test Environment. For guidance, contact cds-support@dits.dmrid.gov.cy
- Use realistic (but fake) data (a data generation sketch follows this list)
- Match the staging environment to the production environment (as much as possible).
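Realistic but fake data does not have to be written by hand. Here is a small sketch using the @faker-js/faker library in TypeScript; the record fields are illustrative and not the service's actual data model:

```typescript
import { faker } from '@faker-js/faker';

// Generate a realistic (but entirely fake) citizen record for the staging environment.
function makeTestCitizen() {
  return {
    firstName: faker.person.firstName(),
    lastName: faker.person.lastName(),
    email: faker.internet.email(),
    phone: faker.phone.number(),
    dateOfBirth: faker.date.birthdate({ min: 18, max: 90, mode: 'age' }),
  };
}

// Create a batch of test users to cover different scenarios.
const testCitizens = Array.from({ length: 20 }, makeTestCitizen);
console.log(testCitizens[0]);
```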
3.2 How to Run Tests and Report Issues
3.2.1 Execute Test Scenarios
- Follow the Test Scenarios one by one and record the results.
- Report on the results of all tests. For failed tests, include a Failed Issue ID.
Example Test Execution Report:
| Test Scenario | Test user | Expected Result | Actual Result | Pass/Fail | Failed Issue ID | Notes |
|---|---|---|---|---|---|---|
| User submits a form with valid data | citizen25 | Form submitted successfully | Form submitted successfully | Pass | – | |
| User submits a form with missing fields | citizen32 | Error message appears | No error message | Fail | BUG-003 | Validation missing on “Email” field |
3.2.2 Log Issues and Fix Bugs
- Report failed tests to the relevant team member, including screenshots, test user details, steps to reproduce, the tested scenario, and its reference number (if available). We recommend using the Issues section of your team’s GitHub repository to log and track these reports (a scripted example follows the sample report below).
- After the issue is fixed, run the test again to confirm it’s fully resolved
Example Issue/Bug Report if GitHub Issues is not used:
| Issue ID | Date | Tester | Test User | Steps to Reproduce | Expected Result | Actual Result | Severity | Suggested Fix | Assigned to | Status |
|---|---|---|---|---|---|---|---|---|---|---|
| BUG-001 | 25/02/2025 | Christina Papadopoulou | citizen23 | 1. Login using test user 2. Open the form 3. Leave ‘Email’ and ‘Phone Number’ empty 4. Click ‘Submit’ | Error message highlights missing fields | No error message shown; user doesn’t know why submission failed | High | Add validation and error messages for required fields | – | Fix in progress |
| BUG-002 | 26/02/2025 | Andreas Andreou | citizen45 | 1. Login using test user 2. Fill out form with valid data 3. Correct a mistake in ‘Date of Birth’ field 4. Try to submit again | Submit button becomes active after correcting the error | Submit button stays disabled even after fixing the field | Medium | Enable the submit button once all validation errors are cleared | – | Reported |
Download a sample issue report
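Where GitHub Issues is used, a failed test can also be filed from a script. Here is a rough sketch using the Octokit REST client; the repository owner, name, and labels are assumptions, and the issue body mirrors the BUG-003 example above:

```typescript
import { Octokit } from '@octokit/rest';

// Token, owner, and repo are placeholders -- use your team's repository details.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function reportFailedTest() {
  await octokit.rest.issues.create({
    owner: 'my-team',     // assumed organisation
    repo: 'my-service',   // assumed repository
    title: 'BUG-003: Missing validation on "Email" field',
    body: [
      '**Test user:** citizen32',
      '**Steps to reproduce:** submit the form with the Email field empty',
      '**Expected:** error message highlighting the missing field',
      '**Actual:** no error message shown',
    ].join('\n'),
    labels: ['bug', 'functional-testing'],
  });
}

reportFailedTest();
```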
4. Non-Functional Testing
Non-functional testing checks how well a digital service performs beyond just its features. It covers areas like speed, reliability, scalability, and overall user experience. This includes:
- Performance Testing (by tracking key performance indicators),
- Load and Stress Testing,
- Security Testing (Penetration Testing), and
- Accessibility Testing.
These tests help ensure the service is stable, secure, and works well under different conditions.
4.1 Performance Testing
Measure how fast, stable, and available the service is.
The Performance Lead is responsible for monitoring and evaluating the service’s performance, availability, responsiveness, and reliability. Regular and accurate tracking helps ensure the service meets user needs and aligns with government standards.
The primary focus of this work is the measurement of the five core Key Performance Indicators (KPIs) defined in the Performance Framework. The Performance Lead must ensure that the necessary data is being collected to accurately measure these:
- Time for a transaction: How long does it take for users to make a transaction using the service?
- User satisfaction: What percentage of users are satisfied with their experience using the service?
- Transaction completion rate: What percentage of transactions do users complete?
- Digital take-up: What percentage of users choose the digital service to complete their task over non-digital channels?
- Service availability: What is the percentage of service uptime and downtime?
In addition to the core KPIs, teams should regularly monitor supporting performance metrics to get a more detailed view of service behavior. These include:
- Page load times
- Response times
- Failure rates
- Uptime percentages
- Error frequencies
To support performance monitoring, DSF uses a range of tools and data sources:
- Matomo (web analytics tool) – for monitoring user experience,
- Pingdom – for tracking service availability,
- Feedback page – for capturing user satisfaction,
- API statistics – for analysing trigger events.
Note: The use of Matomo on-premise is mandatory. It has been selected as the horizontal solution for managing web analytics for government digital services, developed according to the Service Standard and hosted on Gov.Cy. Pingdom and Power BI are powerful monitoring and analytics tools used in DSF to track and visualize performance metrics of digital services. Similar tools, though, can be used to achieve the same monitoring and analysis tasks.
Visual dashboards should be used to track and interpret this data. Regular reviews of both core and supporting KPIs are essential for identifying issues and driving performance improvements.
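Where dashboards pull data from Matomo, supporting metrics can be retrieved through Matomo's Reporting API. Here is a rough sketch in TypeScript; the analytics host, site ID, and reporting period are placeholders:

```typescript
// Fetch yesterday's visit summary from a Matomo instance (host, site ID and
// token are placeholders -- use your on-premise Matomo details).
const MATOMO_URL = 'https://analytics.example.gov.cy/index.php';

async function fetchVisitsSummary() {
  const params = new URLSearchParams({
    module: 'API',
    method: 'VisitsSummary.get',
    idSite: '1',
    period: 'day',
    date: 'yesterday',
    format: 'JSON',
    token_auth: process.env.MATOMO_TOKEN ?? '',
  });

  const response = await fetch(`${MATOMO_URL}?${params}`);
  const summary = await response.json();

  // Example field: nb_visits is the number of visits recorded for the period.
  console.log('Visits yesterday:', summary.nb_visits);
}

fetchVisitsSummary();
```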
4.1.1 Documenting Test Results
4.1.1.1 Key Performance Indicators (KPIs) Sample Report

4.1.1.2 Matomo Sample Performance Metrics


4.1.1.3 Uptime Sample Report

4.2 Load and Stress Testing
Simulate traffic to check how the service performs under pressure.
DevOps and Infrastructure teams are responsible for setting up and maintaining load and stress test tools, such as Apache JMeter, to ensure the service can handle expected and unexpected levels of user activity, measure response times, and support capacity planning. Performance is further validated through Load and Stress Testing of both Front-end and Back-end APIs.
Front-end Testing ensures that user interfaces remain responsive under typical and peak loads, while Back-end Testing assesses the system’s resilience, latency, and processing efficiency. These tests help uncover breaking points and support the development of fallback strategies.
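Apache JMeter (or a similar tool) produces these measurements out of the box; for a quick back-end check, the same idea can be sketched directly in TypeScript by firing concurrent requests and timing them. The endpoint and load figures below are placeholders:

```typescript
// Fire a burst of concurrent requests at an API endpoint and record response times.
// Endpoint and load figures are placeholders -- align them with your service's
// expected usage and performance goals.
const ENDPOINT = 'https://staging.example.gov.cy/api/applications';
const CONCURRENT_USERS = 200;

async function timedRequest(): Promise<number> {
  const start = Date.now();
  const response = await fetch(ENDPOINT);
  await response.text();          // drain the body
  return Date.now() - start;      // elapsed milliseconds
}

async function runLoadTest() {
  const results = await Promise.allSettled(
    Array.from({ length: CONCURRENT_USERS }, timedRequest),
  );
  const times = results
    .filter((r): r is PromiseFulfilledResult<number> => r.status === 'fulfilled')
    .map((r) => r.value);
  const errorRate = ((results.length - times.length) / results.length) * 100;

  console.log('Avg response time (ms):', times.reduce((a, b) => a + b, 0) / times.length);
  console.log('Max response time (ms):', Math.max(...times));
  console.log('Error rate (%):', errorRate.toFixed(1));
}

runLoadTest();
```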
4.2.1 Documenting Test Results
4.2.1.1 Sample Test Report
Include screenshots or graphs from your load testing tool that show results for both front-end and back-end components. These results should demonstrate how the service behaves under realistic traffic and stress conditions.
The following metrics are typically recorded during load and stress testing:
| Metric | Description |
|---|---|
| Response time (avg/min/max) | Time taken to complete a request |
| Throughput | Number of requests handled per second |
| Concurrent users | Number of users active during the test |
| CPU usage under load | System resource usage during peak activity |
| Memory usage | Memory consumption during test |
| Error rate | Percentage of failed or timed-out requests |
These metrics help identify performance bottlenecks, capacity limits, and areas that may need optimisation.
Tip: Save your test results in a structured format (e.g. screenshots, CSV export, or a visual dashboard) and include them in your final test report.
Sample: Summary Report Table – 200 users in 60 seconds

Sample: Response time Front-end 200 users in 60 seconds

Sample: 200 users submitted 580 characters via API in 60 seconds

Sample: 200 users retrieved data from the API in 60 seconds

Note: Adjust load test figures based on your service’s expected usage and performance goals.
4.3 Security Testing (Penetration Testing)
Security testing helps identify and fix vulnerabilities before a service goes live. An independently certified penetration tester must carry out a Web Application Security Audit (WASA) in line with the relevant policy.
To comply with the WASA policy:
- Use secure coding practices
- Address known risks (e.g. OWASP Top 10)
- Run regular audits and security reviews
Penetration testing simulates real-world attacks to check how the service holds up. All findings, risks, and fixes must be documented before launch.
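As one concrete illustration of a secure coding practice against the OWASP Top 10 injection risk, database queries should use parameterised statements rather than string concatenation. Here is a brief sketch using the node-postgres client in TypeScript; the table and column names are illustrative:

```typescript
import { Pool } from 'pg';

const pool = new Pool(); // connection details come from environment variables

// Unsafe: user input concatenated into SQL -- vulnerable to injection.
// const result = await pool.query(`SELECT * FROM applications WHERE id = '${userInput}'`);

// Safe: the value is passed as a parameter and never interpreted as SQL.
async function getApplication(applicationId: string) {
  const result = await pool.query(
    'SELECT * FROM applications WHERE id = $1',
    [applicationId],
  );
  return result.rows[0];
}
```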
4.3.1 Documenting Test Results
Submit the final WASA report confirming the service passed with no security issues. If the report contains sensitive information, anonymise it before sharing. The report should include:
- Date of the assessment
- Scope of the penetration test
- Key findings
- Actions taken to fix the issues
4.4 Accessibility Testing
Accessibility testing checks that the service works for all users, including those with disabilities. It helps identify issues that might affect users who rely on assistive technologies or need specific design considerations.
Your service must meet at least WCAG 2.1 AA standards. Test key areas such as:
- Keyboard-only navigation
- Screen reader support
- Colour contrast
- Text alternatives for images and icons
Use the methods and checklist provided in the Unified Design System (UDS) under the Accessibility Statement Pattern – Test your service or site for accessibility. This will help you run both manual and tool-based checks and track any issues that need to be fixed.
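The automated part of these checks (reported under “Axe DevTools” in the sample below) can also be scripted. Here is a minimal sketch using @axe-core/playwright in TypeScript, limited to the WCAG 2.0/2.1 A and AA rule tags; the page URL is a placeholder:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('start page has no WCAG 2.1 AA violations', async ({ page }) => {
  await page.goto('https://staging.example.gov.cy/start'); // placeholder URL

  // Run axe-core against the rendered page, limited to WCAG 2.0/2.1 A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();

  // Log any violations so they can be copied into the accessibility test report.
  for (const violation of results.violations) {
    console.log(violation.id, '-', violation.description);
  }
  expect(results.violations).toEqual([]);
});
```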
4.4.1 Documenting Test Results
4.4.1.1 Sample Test Report
| Page URL / Name | Axe DevTools (Reporting tool) | Keyboard Nav | Screen Reader | Voice Control | Zoom/Magnifier | Contrast Mode | Visually impaired user (user research) | Issues Found | Notes |
|---|---|---|---|---|---|---|---|---|---|
| /start | | | | | | | | 0 | User research not yet conducted. |
| /your-details | | | | | | | | 3 | Zoom at 200% overlaps label and field. |
| /add-child | | | | | | | | 1 | Some dynamic content is not announced by NVDA. |
| /review | | | | | | | | 0 | Fully accessible in current tests. |
| /confirmation | | | | | | | | 1 | Link text underlined but too light in high contrast mode. |
Download a sample accessibility report
Legend
- ✓ = Pass
- ⚠ = Issue found or partially working
- ✗ = Fails the check
- N/A = Not applicable for that page (e.g. no voice input)
4.5 Design system testing
Design system testing checks that the service follows the Unified Design System and provides a consistent, clear, and user-friendly experience across gov.cy. It helps make sure the service is citizen-focused, accessible, and easy to use.
Use the Unified Design System documentation and apply the 4.2 – Consistent styles with the Digital Services Design System checklist to review each page of the service. This helps confirm that all design elements, layouts, and interactions align with the expected standards.
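A few of the checklist items can be backed by simple automated checks alongside the manual review. Here is a rough sketch in TypeScript with Playwright covering some HTML globals from check 4.2.2; which globals the checklist actually requires should be confirmed against the UDS documentation, and lang, charset, and viewport are assumed here:

```typescript
import { test, expect } from '@playwright/test';

// A rough automated check for a few HTML globals (checklist item 4.2.2).
// lang, charset and viewport are assumptions -- confirm the exact expectations
// against the Unified Design System documentation.
test('page declares basic HTML globals', async ({ page }) => {
  await page.goto('https://staging.example.gov.cy/start'); // placeholder URL

  // The document language should be declared on the <html> element.
  await expect(page.locator('html')).toHaveAttribute('lang', /el|en/);

  // A character-set declaration and a responsive viewport meta tag should exist.
  await expect(page.locator('meta[charset]')).toHaveCount(1);
  await expect(page.locator('meta[name="viewport"]')).toHaveCount(1);
});
```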
4.5.1 Documenting Test Results
4.5.1.1 Sample Test Report
| Page URL / Name | 4.1 Simple for all users | 4.2.1 Design system principles | 4.2.2 Include the HTML 5 important globals | … | 4.2.18 Error messages and error summary | … | 4.2.20 Check answers pattern | … | Notes |
|---|---|---|---|---|---|---|---|---|---|
| /start | | | | … | | … | | … | Clear entry page. Simple intro and CTA. |
| /your-details | | | | … | | … | | … | Error if name missing just says “There is a problem”. Needs more helpful error. |
| /add-child | | | | … | | … | | … | Error message for invalid date is too generic. Suggest specific format hint. |
| /child-parent-type | | | | … | | … | | … | “Select the parent type” is unclear for some users. Too much technical terminology. |
| /review | | | | … | | … | | … | Well structured summary. All data visible and editable. |
| /confirmation | | | | … | | … | | … | Confirmation message does not explain expected timeline or reference number. |
Download a sample Design System test report
Legend
- ✓ = Pass
- ⚠ = Issue found or partially working
- ✗ = Fails the check
- N/A = Not applicable for that page (e.g. no inputs in this page)
4.6 Device Testing
Device testing ensures that digital services work well across a variety of devices and browsers. Digital services should work well in the following browsers (a test configuration sketch follows this list):
- iOS Safari
- macOS Safari
- Windows Chrome
- Windows Edge
- Windows Firefox
- Android Chrome
- Android Samsung Internet browser
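If functional tests are scripted with Playwright, much of this browser matrix can be exercised automatically. Here is a configuration sketch covering several of the combinations above; device names come from Playwright's built-in registry, and Samsung Internet is not available there, so it still needs manual checks:

```typescript
import { defineConfig, devices } from '@playwright/test';

// Run the same test suite across several browser/device combinations.
// Samsung Internet is not available in Playwright, so keep testing it manually.
export default defineConfig({
  projects: [
    { name: 'Windows Chrome',  use: { ...devices['Desktop Chrome'] } },
    { name: 'Windows Edge',    use: { ...devices['Desktop Edge'], channel: 'msedge' } },
    { name: 'Windows Firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'macOS Safari',    use: { ...devices['Desktop Safari'] } },
    { name: 'iOS Safari',      use: { ...devices['iPhone 14'] } },
    { name: 'Android Chrome',  use: { ...devices['Pixel 7'] } },
  ],
});
```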
4.6.1 Documenting Test Results
4.6.1.1 Sample Test Report
| Page URL / Name | iOS Safari | macOS Safari | Windows Chrome | Windows Edge | Windows Firefox | Android Chrome | Android Samsung | Issues Found | Notes |
|---|---|---|---|---|---|---|---|---|---|
| /start | | | | | | | | 0 | Fully functional. |
| /your-details | | | | | | | | 2 | Input border cut off on Windows Chrome & Android Chrome. |
| /add-child | | | | | | | | 0 | Responsive layout works well. |
| /review | | | | | | | | 1 | Button focus styling missing in Windows Edge. |
Download a sample device test report
Legend
- ✓ = Pass
- ⚠ = Issue found or partially working