What is quality assurance?
Quality assurance (QA) is a method for preventing mistakes and defects in manufactured products and avoiding problems when delivering products or services to customers. Quality assurance differs from quality control, the latter being the process by which entities review the quality of all factors involved in production.
How does Darwoft define quality?
A quality product is one that meets our team’s high standards. We set specific standards for each new project and then measure each product against those standards.
What is the purpose and outcome of our QA process?
During QA, we define the process that will ultimately help the end user perceive the high quality of the product. Our QA process is how we ensure that we deliver the best product with the best quality.
What does Darwoft’s QA process look like?
At Darwoft, we work in two-week sprints as we develop a product, and we apply our QA process to each sprint so that we can always deliver a shippable product at the end of each sprint.
There are four steps to our QA process:
- Test Plan: Determine the various items we need to test, including how we test them, who is responsible for testing, and which type of testing we will use.
- Design: Create the different scenarios or test cases that we need to test.
- Execution: Execute the test cases designed in the previous step, either manually or automatically. (More about the benefits of automated testing below.)
- Report: Report back on the current quality of the product, including where the current problem areas lie.
Why do we use automated test execution?
Automated tests have many benefits. They give us a quick snapshot of the quality and status of the application as it moves through the existing delivery process. Our system allows us to upload an application to an environment and automate the entire process and testing from there. We can measure the quality of the application across different environments and between different stages of the process in a way that is not possible with manual execution.
How do we create the architecture behind automated tests?
Based on the specific needs of each product, we look for the most efficient tool for the job. Our team develops and uses package designs with various tools that make the automation process easier and more efficient. For example, report creation and logging APIs may already be integrated into a package design so that they run automatically without further hands-on work from our developers.
Let’s dive deeper into each step of the QA process:
1. Creating a Test Plan
Creating a test plan means defining how and what we will measure in order to determine whether we are delivering a high-quality product. We use certain indicators to measure quality, and these indicators are designed when we create our test plan. These indicators include both a Requirements Coverage percentage and a Test Execution Coverage percentage.
The Requirements Coverage percentage compares the total number of functionalities in the backlog (an ideal list of functionalities to be included in the product) with the total number actually included in the product. For example, we may determine that “Quality means that we are going to cover 90% of the total number of requirements.” This means that if there are 10 desired functionalities, we need to cover at least nine of them in order to consider this a high-quality product.
Test Execution Coverage compares the number of test cases already run with the total number of test cases to be run. For example, we might consider a quality product one that covers 70% of the total test cases to be run.
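As a rough illustration, both indicators reduce to simple ratios. The sketch below uses hypothetical counts and thresholds, not data from a real project:

```javascript
// Hypothetical illustration of the two quality indicators described above.
// All counts and thresholds are made-up examples.

const requirementsInBacklog = 10; // desired functionalities in the backlog
const requirementsCovered = 9;    // functionalities actually included

const testCasesPlanned = 120;     // total test cases to be run
const testCasesExecuted = 90;     // test cases already run

const requirementsCoverage = (requirementsCovered / requirementsInBacklog) * 100;
const executionCoverage = (testCasesExecuted / testCasesPlanned) * 100;

console.log(`Requirements Coverage: ${requirementsCoverage}%`); // 90%
console.log(`Test Execution Coverage: ${executionCoverage}%`);  // 75%

// Each project defines its own thresholds, e.g. 90% and 70%.
console.log(requirementsCoverage >= 90 && executionCoverage >= 70); // true
```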
It is important to note that these percentages are not fixed. Rather, we define new standards for each product based on the specific needs of that customer and project. A proof-of-concept project, for example, does not need to meet the same standards as a final shippable product.
The key objective of the test plan is to define how we will measure the quality of the product at the end of our process, and to make sure that the entire team is in agreement on those measures.
2. Designing the Test Cases
The QA team starts to work on the design of the test cases in parallel with the developers working on feature construction. Each test case will be linked to a specific user story.
We always test against one Happy Path, meaning that everything goes smoothly for the user; this is the ideal user experience within the application. In addition, we test against multiple Fail Cases, in which things go wrong for the user. Fail Cases are the most important scenarios to test because we need to explore and address all the possible alternatives to the Happy Path.
At the end of this step, our team will have a Test Case List to guide the execution process in the next step. The Test Case List includes fields for the priority and creator for each test.
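For illustration only, a fragment of such a list might look like the sketch below; the fields mirror those described above, and the specific cases, IDs, and user story are hypothetical:

```javascript
// Hypothetical fragment of a Test Case List for a login user story.
// IDs, stories, and field values are illustrative placeholders.
const testCases = [
  {
    id: 'TC-001',
    userStory: 'US-12: User logs in',
    type: 'Happy Path',
    summary: 'Valid credentials take the user to the dashboard',
    priority: 'High',
    creator: 'QA Analyst',
  },
  {
    id: 'TC-002',
    userStory: 'US-12: User logs in',
    type: 'Fail Case',
    summary: 'Wrong password shows an error and keeps the user on the login page',
    priority: 'High',
    creator: 'QA Analyst',
  },
];

console.table(testCases);
```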
3. Executing Tests
When we execute a test, we have two options: manual or automated execution. Manual execution is very straightforward, in that we go to a page, execute the test, and compare the results against the expected outcome. Automated execution involves developing the test cases and programming them to run automatically behind the scenes.
We start with a certain number of test cases that were created during the Design phase, but this is not the final list of test cases that we will execute. Instead, as more and more user stories or features are added, we add new test cases. Automated execution allows us to add more test cases efficiently, without adding an exorbitant amount of time and work for our team. Darwoft’s automated testing process is one of our best practices for ensuring quality and delivering a product on schedule.
Our automated test process includes three stages:
- Test Plan Analysis
- Automated Test Development
- Automated Test Execution
With the Test Plan Analysis, our team designs the complete automated process. We take Test Suites as input, and we generate automated Smoke and Regression tests, along with their Reports, as output. Smoke tests are one of our most essential groups of test cases; they make sure that the application will not break when being used. Both Smoke and Regression tests run in the lower environments, meaning the dev environment or an integration environment.
Our automation testing strategy is based on the Page Object Model pattern, which we begin developing at an early stage of the QA process in order to have a maintainable foundation for writing automated tests.
For our Automated Test Development, our team uses a prepared environment for our tests. We use Cypress for our automation scripts, and we generate reports accordingly. Cypress is an end-to-end testing framework for web test automation. It enables our front-end developers and test automation engineers to write automated web tests in JavaScript, the main language used for developing websites.
The automated test development process includes:
- Defining the environments for automated testing
- Designing Page Objects according to the software design
- Translating test cases into JavaScript steps in Cypress that use the Page Objects (see the sketch below)
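To make this concrete, here is a minimal sketch of a Page Object and a Cypress spec that uses it. The route, selectors, credentials, and error message are hypothetical placeholders, not code from a real project:

```javascript
// login.page.js — a hypothetical Page Object that hides selectors behind actions.
export class LoginPage {
  visit() {
    cy.visit('/login'); // hypothetical route
  }

  logIn(email, password) {
    cy.get('[data-cy=email]').type(email); // hypothetical selectors
    cy.get('[data-cy=password]').type(password);
    cy.get('[data-cy=submit]').click();
  }
}
```

```javascript
// login.cy.js — test cases translated into Cypress steps that use the Page Object.
import { LoginPage } from './login.page';

describe('Login', () => {
  const page = new LoginPage();

  it('TC-001 Happy Path: valid credentials reach the dashboard', () => {
    page.visit();
    page.logIn('user@example.com', 'correct-password');
    cy.url().should('include', '/dashboard');
  });

  it('TC-002 Fail Case: wrong password shows an error', () => {
    page.visit();
    page.logIn('user@example.com', 'wrong-password');
    cy.contains('Invalid credentials').should('be.visible'); // hypothetical message
  });
});
```

Because the spec talks to the Page Object rather than to raw selectors, a change in the page’s markup only requires updating one file.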
To begin Automated Test Execution, all manual tests that have been converted to automation are executed in order to obtain results for the released system. This allows us to find bugs faster and more efficiently, and to repeat tests easily, making the whole process more trustworthy and measurable.
This process generates a report that gives information about the execution itself, the environment, the type of test, and all the failures.
We consider Test Reports to be one of the biggest advantages of automated testing. Our team is able to retrieve information about each run, environment, and test parameters, and then make an immediate analysis of the software’s general status after an automated Regression test.
Test Reports include a wealth of valuable information (see the sketch after this list):
- Executed Tests
- Passed Tests
- Failed/Total Tests
- Statistics per test and per suite
- Environment data
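As a simplified sketch of how such a summary could be assembled from raw results (the data shape here is hypothetical; in practice a Cypress reporter produces this information automatically):

```javascript
// Hypothetical post-processing of raw test results into the report fields above.
// The `results` shape is illustrative, not a real reporter format.
function summarize(results) {
  const executed = results.length;
  const passed = results.filter((r) => r.status === 'passed').length;
  const failed = executed - passed;

  // Statistics per suite.
  const perSuite = {};
  for (const r of results) {
    perSuite[r.suite] = perSuite[r.suite] || { executed: 0, passed: 0 };
    perSuite[r.suite].executed += 1;
    if (r.status === 'passed') perSuite[r.suite].passed += 1;
  }

  return {
    executed,
    passed,
    failedOverTotal: `${failed}/${executed}`,
    perSuite,
    environment: process.env.TEST_ENV || 'dev', // hypothetical environment variable
  };
}

console.log(summarize([
  { suite: 'Login', status: 'passed' },
  { suite: 'Login', status: 'failed' },
  { suite: 'Checkout', status: 'passed' },
]));
```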
Additionally, we use the Cypress Dashboard to save records of the tests, with complete details of each run, integrated with GitHub and CI platforms.
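Connecting a project to the Dashboard is mostly configuration. A minimal sketch (the projectId and baseUrl below are placeholders, and exact setup may vary by Cypress version):

```javascript
// cypress.config.js — minimal sketch; projectId and baseUrl are placeholders.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  projectId: 'abc123', // placeholder; a real ID is issued by the Cypress Dashboard
  e2e: {
    baseUrl: 'http://localhost:3000', // hypothetical application under test
  },
});
```

A CI job can then record runs with `npx cypress run --record --key <record-key>`, which makes each run’s history and details available in the Dashboard.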
Throughout the automated test execution process, we divide up tests with the goal of covering as much of the application as possible. We do this by integrating test cases into our continuous integration process. This means that every time a developer integrates new code, a set of test cases runs automatically, letting the developer know whether the new code breaks anything. In addition, daily tests are run to ensure a regular testing cycle even if new code has not been uploaded.
4. Reporting on Quality
We make it a point to maintain a high level of quality for our products at Darwoft. This means delivering products without any critical issues. We use our reporting system to report back on issues discovered during testing, and we don’t release a product until our reports show that the product has met all of our quality standards.
Our QA dashboard includes various reports with information about how the product is performing against different quality indicators, giving our team and our customers a snapshot of the current status of the product. These reports include:
- Test Case List: includes a key and summary, priority ranking, name of the creator, date of creation, and status. This report helps to measure the number of test cases designed by our quality assurance analysts.
- Test Execution Progress Report: shows the number of test cases that have passed our standards and helps to measure the test execution coverage.
- Bug Report: reports on failed test cases and helps to measure the percentage of bugs by status, the percentage of bugs by severity, and the percentage of critical or highest-severity bugs. The highest-severity bugs are those that actually crash the product. This report tells us how close we are to delivering a product. Again, we do not release a product if it has any critical issues.
- Traceability Report: measures the requirements coverage and test cases coverage by user story. This report tells us if we are covering all of the scenarios and test cases for one user story.
- Automation Coverage Report: helps to measure automated test coverage. We provide this report because more and more of our customers want automated test cases included in their continuous integration pipeline. The automated process means that every time our customers integrate new code, a test runs automatically to make sure that everything is still working correctly.
- Testing Summary for Customers: provides all of the information about an upcoming release, including the test cases list.
Darwoft’s QA process helps us to ensure that we always deliver the highest quality product possible. We set incredibly high standards for our team and our products, and we don’t release a new product until it has met these standards. Quality assurance makes it possible for us to guarantee performance and to deliver products that scale. This is why our customers see us as a trusted partner in bringing their ideas to market time and time again.