Beta
Now we’re ready to take our best idea and start building it and using it for real. The beta phase breaks down into:
creating a service
testing it with a small group of users
running it publicly for further feedback and improvement.
A beta is where we see our full digital project delivery team working together. We have researchers, designers, business analysts, architects, software developers and testers all collaborating to deliver for the client.
We generally work in sprints: 2-week, planned and prioritised chunks of work. But the format is slightly different to a discovery or alpha. In a beta, there are a number of different tracks running in parallel: design, analysis, development and testing. And usually, our analysis and design teams are at least one sprint ahead of development.
The research and design teams create prototypes and test them with users, while the technical teams set up the architecture and infrastructure to support them. The designs are handed over for development and testing while the design team moves to designing the next feature.
The length of the beta phase varies, depending on the client’s requirements and project deadlines. We continue to work until we have a product that’s ready to launch.
We consider how the solution will scale and be supported when live. This might mean creating a plan to replace the existing service, or it might be paving the way for a new service to launch successfully. Both need change management and service support capabilities, so it’s all hands on deck to bring the service to life.
Our projects are delivered in line with the Government Service Manual.
While an alpha is all about experimentation, a beta is about building and testing the actual solution. Because people begin to use the actual service, the beta phase confirms that it works well in context. It allows further iterations to be made, based on real-world use.
We create a Minimum Viable Product (MVP). This is a basic version of the product that can be tested to prove the value of the service. It can, but doesn’t need to, be launched to the public.
Often the beta that is launched goes beyond MVP functionality. Its value is to test the service with real users for real feedback. This is done initially through a private beta, available to a limited audience, where any issues discovered pose less risk than when the service is launched at scale. It’s also an opportunity to engage with early adopters and build advocacy for the service.
When confidence is high enough, a public beta is launched. This exposure to the full audience brings in additional insight around potential improvements to the service. In some projects, the client chooses to go straight to full launch.
Moving through the beta phase validates our assumptions while testing performance and scalability. We minimise risk and maximise potential by constantly seeking to learn from and iterate the service.
The design track team members will be familiar to us, having gone through a discovery and alpha already. The user researchers, content designers and interaction designers all work with the business analysts (BAs) to keep the design track progressing. We also have an accessibility specialist to make sure the products we’re delivering are accessible for all users.
The development track introduces a new team:
Software developers - write the code
Quality assurance (QA) testers - make sure the system behaves in exactly the way it’s meant to
DevOps engineers - responsible for supporting the environments where the code lives
Technical architects - pick up the work of the solution architect, who did a high-level review of the requirements for the solution in alpha. The technical architect has the responsibility of mapping in detail exactly what technology will be used, how it will be used, and what it needs to connect to. They produce high and low level designs that the infrastructure and network engineers require to build the hosting environment for the application.
All of this is overseen by a delivery manager and scrum master.
Delivery manager - runs the project, directs resources, reports on progress, manages risks, keeps the project objectives clear and is the main point of contact for the client.
Scrum master - keeps the development team organised and progressing the deliverables that have been agreed. They’re also responsible for guiding the team through the scrum process, ensuring ceremonies take place, artefacts are produced and resolving any blockers. Depending on the size of the project, the delivery manager and scrum master roles may be performed by the same person.
The client plays a vital role throughout beta. They define the project’s goals and objectives, provide subject matter expertise, help to prioritise the backlog and assist in sprint planning. Having the client so closely involved ensures that product decisions best reflect the organisation’s goals.
As with all delivery phases, we start with a kick off session to establish ways of working. We confirm roles and responsibilities, how people like to work and the ways in which we’ll need to flex our project delivery style to meet the needs of the client.
Then in sprint zero, we have 2 key objectives: sign off on project readiness and start planning and prioritising the work.
To sign off on project readiness, we have to review the quality of the alpha completed earlier. This is particularly important when another supplier has done the discovery and alpha work before us. The business analysts, designers and the solutions architect check that we have everything needed to start the beta as planned, without changes to the scope or budget. If any changes need to be made, they should be done before any beta phase design or coding has started.
Once approved, we define a plan for how to deliver the project within time and budget. For each feature of development, we estimate the effort that will be required and make a decision on its overall approach. For example, whether we should code the functionality, license existing services, or use open source components. All of the work we will be designing and coding ourselves is put into a sprint plan according to time, budget and logical technical order.
With a high-level plan complete, we’re ready to enter sprint one.
Each design sprint starts with a detailed sprint plan which prioritises the features we’ll address based on what’s necessary for the MVP, the time and budget available and any external pressures. We also clarify the technical approach to make sure we’re using the most efficient and appropriate technical solutions possible.
With each feature we build, we create low fidelity prototypes (the minimum functionality and visual elements needed to share a concept). They are sense-checked with the development team to confirm they follow the technical designs covered in project planning, and with the client to make sure we’re delivering what they need and expect.
After we’ve signed off the low fidelity designs, the designers can start creating high fidelity prototypes. These include the full functionality in a high fidelity user interface. The user researchers then carry out usability testing on the prototypes with end users to make sure the system works easily and in a logical way. All findings feed back into the designs as we continually iterate and refine.
At the same time, the business analyst works with the software developers to create detailed process maps showing the required business logic for any given feature. They map out what the system should do so the developers can break the work down into components. They also make technical decisions on data and its formats and characteristics in the system.
As part of the handover to development, both the technical and user-centred design requirements are documented. Specific outputs from the design track are pulled together to support the user stories that form the product backlog including:
designs (in Figma)
design documentation (in Confluence)
data characteristics
business logic and rules
process maps
Similarly, the first thing the development team does in each sprint is sprint planning. They agree how many prioritised tickets can be addressed in that sprint, and the scrum master facilitates how the work will be managed.
The software developers work on the prioritised stories, writing code to bring the ideas to life. The QA team check the stories meet the acceptance criteria, ensuring the overall quality of the development. And the business analysts check that we’re still meeting the original business and user need we set out to address.
As the testers identify bugs, triage sessions will take place to review and determine a priority. Depending on the priority, work can either be assigned back to the developer for immediate action or placed in the backlog to be picked up at a later date.
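The triage flow described above can be sketched as a simple prioritisation rule. This is an illustration only, not a tool we use: the severity levels and the "fix now" threshold are assumptions made for the example.

```python
# Illustrative sketch of the bug triage rule described above.
# The severity scale and the cut-off for immediate action are assumptions.

from dataclasses import dataclass

SEVERITIES = ["critical", "high", "medium", "low"]  # highest priority first


@dataclass
class Bug:
    title: str
    severity: str  # one of SEVERITIES


def triage(bugs):
    """Split bugs into immediate fixes and backlog items by severity."""
    fix_now, backlog = [], []
    for bug in bugs:
        if SEVERITIES.index(bug.severity) <= SEVERITIES.index("high"):
            fix_now.append(bug)   # assigned back to a developer for immediate action
        else:
            backlog.append(bug)   # placed in the backlog for a later sprint
    return fix_now, backlog
```

For example, `triage([Bug("Login fails", "critical"), Bug("Typo on help page", "low")])` would route the login bug for immediate action and the typo to the backlog. In practice the priority decision is made in the triage session, not by a rule like this, but the outcome is the same two-way split.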
In each sprint, our teams showcase what they have designed and developed to both internal and external stakeholders. The show-and-tells are a great opportunity to share what’s been achieved, sense-check it with the client and get buy-in for the project as a whole. They also give us the opportunity to address any challenges that might arise.
The integration of our design and development teams is crucial to the successful outcome of a project.
In addition to QA testing, we also run both User Acceptance Testing (UAT) and usability testing.
UAT is the client’s opportunity to check the software does what it was designed to do in real-world situations before the service is launched. We work with the client and specified users to check the application's functionality and that it fits into the client’s wider business processes.
Usability testing makes sure the product is intuitive and easy to use. There’s no point having a perfectly functioning system if no one knows what button to push when. Most of this will have already been ironed out in the design track, but it’s always good to check the end product works well for the user too. Testing the coded work also allows users to experience and assess the whole service, rather than just being exposed to bits at a time as part of the earlier design testing.
After it’s passed all the tests, the code is ready to be pushed into a live environment.
Once the service is built, it’s ready to be tested as a whole by real users.
There are 3 major milestones that are usually met by the end of a beta: a private beta, a public beta launch and agreement that the service is ready for the live phase. We might be commissioned to run both betas, or the client might prefer to move straight to launch.
Private beta. Once the system is fully coded, it can be launched to a live environment. But you don’t have to make it live for everyone. A private beta is the opportunity to launch the new service to a smaller subset of the user base, to test and iterate it before the big launch. As clients and their users identify any problems and potential improvements in the service, these are prioritised and addressed by designers and developers.
Public launch. We’re finally ready to make the service live. That problem we started off exploring back in a discovery has a solution that is ready to be launched to the public. This is a full-scale launch, with all relevant users onboarded and training and user guides provided to the client.
Ready to move to live. The project team sign off that the service is working well enough in beta that it can move into the live delivery phase. Everything required to do this is documented in the beta report.
When the service is running in beta, we provide additional support to users. Public beta is where any real-life issues might come out in the wash, and we’re on hand to fix them. If it’s a bug, then it’s shipped back to development. If it’s an enhancement, it goes through prioritisation and approval processes with the client before being sent to our design track.
Depending on what’s been agreed with the client, we will run the public beta for a set period of time with support from both the project team and live team. We are on hand while the client’s live team settles in before they take over the running of the service in the live phase. However, it might be that we’re commissioned to run the service in live. Or there might be another organisation trusted to run the service.
Either way, the most important thing for moving into a live stage is clear and accurate documentation. This will be collated in the beta report which includes:
Cloud architecture – details of infrastructure and services
Deployment and rollback procedures – the deployment strategy, including how updates are rolled out and rolled back in the event of failures
System requirement specifications – including functional and non-functional requirements as well as configurations
Operational support documentation – details on monitoring, logging and system alerts
Service Level Agreements (SLAs) - specifying metrics such as uptime, latency and support response times
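As a rough illustration of what an uptime SLA implies in practice, the permitted downtime can be worked out directly from the uptime target. The 99.9% figure below is an example, not a value from any specific agreement.

```python
# Worked example: downtime permitted under an uptime SLA.
# The 99.9% target and 30-day period are illustrative assumptions.

def allowed_downtime_minutes(uptime_target: float, days: int = 30) -> float:
    """Minutes of downtime permitted over a `days`-day period,
    given an uptime target expressed as a fraction (e.g. 0.999)."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_target)


print(round(allowed_downtime_minutes(0.999), 1))  # 43.2 minutes in a 30-day month
```

This kind of arithmetic is what makes SLA metrics concrete for the live team: a 99.9% uptime commitment allows roughly 43 minutes of downtime a month, while 99.99% allows only about 4.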
We also provide technical training to support the systems we’ve built. And finally, there will be a prioritised backlog of work that can be addressed in the live environment.
In some instances for government projects, we may need to pass a service assessment reviewing everything above before we can officially progress to the live delivery phase.