A product’s profitability depends directly on its quality, and building successful products means holding to the highest quality standards. In fact, 88% of users are likely to abandon an app if they frequently encounter glitches and bugs, and according to another survey, 80% of respondents churn within 90 days. Here’s why quality assurance is an integral part of the software development life cycle.
Quality Assurance (QA) is an umbrella term covering all aspects of ensuring the required product quality level at all technological stages of software development, release, and operation.
However, simply hiring a few freelance testers who are not involved in your development process won’t get you far. Establishing an effective QA process requires flexibility and expertise.
I’m a project manager and business analyst at Rocketech. Our team works on many projects, both long- and short-term. At times, short-term projects were assigned to me one after another, and for each of them we needed to set up a QA/testing process from scratch to ensure quality. On the first few projects, our team and I tried different approaches, spent time on them, made some mistakes, and finally arrived at a well-defined QA process.
Each project is unique. Short-term projects require 3–6 months of development. Sometimes you also need 1–2 months for a kick-off – discovery and preparation, including setting up communication with the customer and obtaining the required details, requirements, and access. In my experience, a short-term project with one scrum team takes 4–6 months overall.
I divide all projects into:
● Three categories – web, mobile, and mobile+web;
● Two types – new and existing (an existing codebase where features need to be added or adjusted).
So the first step is to set up the required configuration for each project: create a Git repository, outline the project structure, configure CI, grant the required rights and access, and so on.
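As a rough illustration, that bootstrap step might look like the script below. The layout, file names, and CI stub are hypothetical, not our actual template:

```shell
#!/bin/sh
# Hypothetical project bootstrap: repository, structure, CI stub, branches.
set -e
proj=$(mktemp -d)/new-project        # stand-in for the real project location
mkdir -p "$proj" && cd "$proj"

git init -q -b master                # -b requires git >= 2.28
git config user.email demo@example.com
git config user.name Demo

mkdir -p src tests .ci               # outline the project structure
printf 'stages: [build, test]\n' > .ci/pipeline.yml   # placeholder CI config
printf '# New Project\n' > README.md

git add .
git commit -q -m "Project skeleton: structure and CI stub"
git branch develop                   # long-lived branch alongside master
```

In a real setup, the CI config would define the pipelines that later build per-feature test environments.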
The core of our QA process is properly configured continuous integration (CI). First, I will explain how it works for web projects and then describe how it works in mobile development.
Our QA Process
Let’s imagine we are starting the first or second sprint (remember, projects start from sprint zero). We have a ‘develop’ branch, where all existing code is stored, and a ‘master’ branch, from which the code is delivered to production. The names ‘develop’ and ‘master’ are informal yet conventional in the developer community. At the beginning of the sprint, we create a new branch from master named after the sprint. Let it be ‘Branch – Sprint 2’.
When developers start working on a feature (say, feature 11), they create a new branch from ‘Branch – Sprint 2’ called ‘Branch – Sprint 2, Feature 11’, where all new code written for this feature is stored. Once the feature is ready for testing and other developers have reviewed the code, it is pushed to our repo. CI then creates a separate test environment specifically for this feature, where QA tests it. If testing reveals any issues, developers reopen the feature and fix them. Once there are no issues, the feature is merged into ‘Branch – Sprint 2’.
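In Git terms, the sprint and feature flow can be sketched in a throwaway demo repo like this (branch names are illustrative; in a real setup, pushing the feature branch would trigger CI to build the dedicated test environment):

```shell
#!/bin/sh
# Sketch of the sprint/feature branching flow in a disposable demo repo.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master                # -b requires git >= 2.28
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "initial"

# Sprint start: create the sprint branch ('Branch - Sprint 2') from master.
git branch sprint-2 master

# Feature start: branch off the sprint branch ('Branch - Sprint 2, Feature 11').
git checkout -q -b sprint-2-feature-11 sprint-2
echo "feature 11 code" > feature11.txt
git add feature11.txt && git commit -q -m "Feature 11"
# ...a push here would have CI spin up the per-feature test environment...

# QA found no issues: merge the feature back into the sprint branch.
git checkout -q sprint-2
git merge -q --no-ff -m "Merge feature 11" sprint-2-feature-11
```

The `--no-ff` merge keeps an explicit merge commit per feature, which makes the sprint history easier to audit.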
Here’s how the process goes: one by one, we test features in separate environments and merge them into the current sprint branch once testing reveals no issues. This approach makes the QA process clearer: it’s easier to find the root cause and pinpoint which part of the code is at fault.
In the second half of the sprint, when all features are merged into ‘Branch – Sprint 2’, QA performs integration testing to ensure that all the features work together correctly. After testing, we run a demo for the customer. Once everything is approved by the customer, we deploy the code to production: we merge ‘Branch – Sprint 2’ into the master branch and make the deployment.
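The release step then reduces to a merge plus a deploy trigger. Here is a minimal sketch; the tag-driven deploy at the end is an assumption, just one common way to let CI pick up releases from master:

```shell
#!/bin/sh
# Sketch of the end-of-sprint release: merge the tested sprint branch to master.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master                # -b requires git >= 2.28
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "initial"
git checkout -q -b sprint-2
git commit -q --allow-empty -m "Sprint 2 features (integration-tested, demoed)"

# Customer approved the demo: merge the sprint branch into master...
git checkout -q master
git merge -q --no-ff -m "Release sprint 2" sprint-2
# ...and trigger the deployment, e.g. by tagging so CI deploys from master.
git tag v0.2.0
```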
This approach works for web projects, but we also apply it to mobile development, with one difference: instead of creating separate environments for each feature, we run separate app builds with these features and test them on mobile devices.
Rocketech Expert Choice
We use TestFlight for distributing iOS test builds and Firebase App Distribution for Android.
This approach is understandable and easy to follow. However, let’s look at some more challenging cases: scenarios where several features depend on each other. For example, without mocks or stubs, the front end can’t be tested until the back end is implemented. In Jira, we create a user story with subtasks for the front-end and back-end implementations. This way, we build a separate branch for testing the user story rather than an individual feature (subtask).
A few more practical tips:
- Kill used test environments to avoid wasting server resources.
- Mind database connections. When you create a new environment, there are two ways to connect it:
1. Creating a separate DB for the test environment;
2. Linking the new environment to an existing DB.
Both approaches are purpose-specific. However, the second option is often preferable, as QA doesn’t spend time creating test data (accounts, etc.) in the new test environment.

Last but not least: test documentation. We keep a ‘Master checklist’ in Confluence where QA records each test and its status. The checklist is divided by Jira ticket, and every ticket has a list of required tests. Once QA receives a ticket and performs the tasks, they update the status of each test.
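The two database-connection options can be expressed as a small helper. This is a sketch under the assumption that environments are configured through a `DATABASE_URL` variable; the host, database names, and `SEPARATE_DB` flag are all hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: choose a DB connection for a new per-feature test env.
FEATURE="${FEATURE:-feature-11}"
SEPARATE_DB="${SEPARATE_DB:-no}"   # "yes" = option 1 (own DB), "no" = option 2

if [ "$SEPARATE_DB" = "yes" ]; then
  # Option 1: a dedicated database - fully isolated, but starts empty,
  # so QA has to create test data (accounts, etc.) from scratch.
  DATABASE_URL="postgres://db-host/test_${FEATURE}"
else
  # Option 2: link to the existing shared test DB - test data is already there.
  DATABASE_URL="postgres://db-host/shared_test_db"
fi
echo "DATABASE_URL=$DATABASE_URL"
```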
Quality assurance is not just bug-fixing or better software testing. QA best practices also include optimizing all development processes and enhancing communication, both within the team and with the client. At Rocketech, we strive to meet our partners’ requirements and create the highest-quality products that sell. Contact us for more information!