Every experienced QA engineer knows the taste of champagne on a beer budget. Whenever you have to run a regression test, you find yourself facing a strict deadline. Most projects have late functionality changes, urgent marketing requests, and tech debt that delay the regression run stated in the release procedure. "Not enough time" is not what end users are willing to hear. So what can actually be done in such circumstances?
1. Control Device/OS coverage
If you are not looking forward to going to the dogs maintaining a whole hangar of devices, here are some hints to use.
The first one is obvious: use analytics systems to identify which devices are worth spending time on for YOUR audience. We were surprised enough to rebuild our device stack once we stopped relying on market-wide statistics and focused on our own audience. It really differs from product to product, even for apps with 10M+ DAU, despite the law of large numbers. As a result, we build a fitted pool of devices for each client, with not a sniff of a one-size-fits-all approach.
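The idea above can be sketched as a small greedy selection: given your own analytics export (one record per session's device model), pick the smallest set of models that covers a target share of sessions. The data and the 90% threshold here are hypothetical, for illustration only.

```python
from collections import Counter

def build_device_pool(sessions, coverage=0.90):
    """Pick the smallest set of device models that covers the given
    share of sessions seen in YOUR analytics, not market-wide stats."""
    counts = Counter(sessions)
    total = sum(counts.values())
    pool, covered = [], 0
    for model, n in counts.most_common():  # most popular models first
        pool.append(model)
        covered += n
        if covered / total >= coverage:
            break
    return pool

# Hypothetical analytics export: one entry per session's device model.
sessions = ["Galaxy S7"] * 50 + ["Nexus 5"] * 30 + ["LG G2"] * 15 + ["HTC One M9"] * 5
print(build_device_pool(sessions))  # → ['Galaxy S7', 'Nexus 5', 'LG G2']
```

Rerunning this per product is exactly why the pools come out different even for apps of similar scale.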
The second one is OS mixing. Let's figure out how it works. If you have a suite of regression test cases, split it into as many parts as the number of operating system combinations you need to cover. In particular, run the single suite in 4 threads, then re-iterate with the parts mixed across OS versions (as opposed to running 4 suites in a single thread each). And here is the first magic result: the testing time is the same, but bugs get identified earlier.
The third hint is to shuffle devices inside each OS group. On each iteration we use different device models from the pool for the same OS. Check the table below for an example combination.
| | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 |
| --- | --- | --- | --- | --- |
| Test suite part 1 | Android 4.4.x (LG G2) | Android 5.x | Android 6.x | Android 7.x (Galaxy S7) |
| Test suite part 2 | Android 5.x (Nexus 5) | Android 6.x | Android 7.x | Android 4.4.x |
| Test suite part 3 | Android 6.x (HTC One M9) | Android 7.x (Galaxy S8 Edge) | Android 4.4.x | Android 5.x |
| Test suite part 4 | Android 7.x (Galaxy S8) | Android 4.4.x (HTC One M7) | Android 5.x | Android 6.x |
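The rotation in the table is a simple cyclic shift: part *i* starts at OS *i* and walks through the whole OS list over the iterations. A minimal sketch of generating such a schedule (suite-part and OS names are taken from the table; the device shuffle within each OS group would be layered on top):

```python
def mixing_schedule(suite_parts, os_versions):
    """Rotate OS versions across iterations so that every suite part
    eventually runs on every OS: one row per part, one column per iteration."""
    n = len(os_versions)
    return {
        part: [os_versions[(i + shift) % n] for shift in range(n)]
        for i, part in enumerate(suite_parts)
    }

parts = ["part1", "part2", "part3", "part4"]
oses = ["Android 4.4.x", "Android 5.x", "Android 6.x", "Android 7.x"]
schedule = mixing_schedule(parts, oses)
print(schedule["part3"])  # → ['Android 6.x', 'Android 7.x', 'Android 4.4.x', 'Android 5.x']
```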
2. Rank your cases
When you end up in a schedule hole, it's good to know where to allocate your effort in the remaining time, ideally in a way that doesn't let any critical issue slip through.
The way out is to rank the cases so you run the most valuable ones first. But how do you find out the relative value of a case? You can follow the risk-based paradigm, but it takes real time to invest. We have empirically come to a shorter way:
Rank of a test case = importance value + frequency value
Importance value (from 1 to 5, where 5 is the highest) indicates how important that functionality is for the user and for our business (a weighted average of the two).
Frequency value (from 1 to 5, where 5 is the highest) indicates how many users use that functionality and how often (inspect your analytics for the answer).
To come up with test suites, assign the ranks and sort the test case list by rank, descending. The top 10% we call the Minimal Acceptance test. The top 40% is the Advanced Acceptance test. And the Full Regression is, correspondingly, the full list.
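The ranking and slicing above fit in a few lines. A minimal sketch, with the test cases and their scores invented for illustration:

```python
def rank(case):
    """Rank = importance value + frequency value, each on a 1-5 scale."""
    return case["importance"] + case["frequency"]

def build_suites(cases):
    """Sort by rank descending; top 10% is Minimal Acceptance,
    top 40% is Advanced Acceptance, everything is Full Regression."""
    ordered = sorted(cases, key=rank, reverse=True)
    n = len(ordered)
    return {
        "minimal": ordered[: max(1, round(n * 0.10))],
        "advanced": ordered[: max(1, round(n * 0.40))],
        "full": ordered,
    }

# Hypothetical case list with synthetic importance/frequency scores.
cases = [{"id": f"TC{i}", "importance": i % 5 + 1, "frequency": (i * 3) % 5 + 1}
         for i in range(20)]
suites = build_suites(cases)
print(len(suites["minimal"]), len(suites["advanced"]), len(suites["full"]))  # → 2 8 20
```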
3. Use appropriate test layer for target area
A really obvious thing, but one that is often forgotten at the test planning phase ("we'll come back to this stuff later, let's begin building test cases"). When manual QA effort is used to test the API between the client and backend parts of an application, the testing scenarios become huge and time-consuming. Some areas (for example, complex calculations or data flows inside the application) are cheaper to test in the white-box paradigm; other areas are best covered with a series of beta tests. That is often cheaper than spending the effort of highly qualified test engineers. The key rule here is to perform test analysis keeping the whole available toolset in mind.
4. Combine regression test suites
When I mentioned splitting the test suite into parts for OS mixing, I didn't draw your attention to one more benefit of keeping the suites fragmented. When new code has been merged in, you don't always need to run the Advanced Acceptance test in full. If your testers can identify the impact of the changes (affected classes and methods), you can figure out which areas should be regressed. In this scenario, having predefined test suites split by functionality is worth investing in.
Below is an example for a workout tracking app:
| Suite | Functional areas | Size |
| --- | --- | --- |
| Advanced Acceptance pt1 | Registration / Login / Profile / FTUE | 113 test cases |
| Advanced Acceptance pt2 | Track / Log Workout | 198 test cases |
| Advanced Acceptance pt3 | Social Sharing / Third-party Integration / Partners | 168 test cases |
| Advanced Acceptance pt4 | Settings / Privacy / Challenges | 185 test cases |
Whenever core Privacy features are touched, you run the Advanced Acceptance pt4 test suite. This static approach of predefined test suites grouped by area is the first step. Let's have a look at how we can truly unveil the full potential of change impact analysis.
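The static approach boils down to a lookup table from functional areas to suite parts. A minimal sketch, with the area names chosen hypothetically to match the table above:

```python
# Hypothetical mapping from functional areas to the predefined suite parts.
AREA_TO_SUITE = {
    "registration": "Advanced Acceptance pt1",
    "login": "Advanced Acceptance pt1",
    "workout_tracking": "Advanced Acceptance pt2",
    "sharing": "Advanced Acceptance pt3",
    "settings": "Advanced Acceptance pt4",
    "privacy": "Advanced Acceptance pt4",
}

def suites_for_change(affected_areas):
    """Return the minimal set of predefined suites to regress
    for a given set of affected functional areas."""
    return sorted({AREA_TO_SUITE[a] for a in affected_areas if a in AREA_TO_SUITE})

print(suites_for_change({"privacy", "sharing"}))
# → ['Advanced Acceptance pt3', 'Advanced Acceptance pt4']
```

The set comprehension deduplicates, so a change touching both Settings and Privacy still triggers pt4 only once.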
5. Automate change impact analysis
It's very useful to trace your black-box test cases to specific code areas (build a white-box basement for black-box tests). Then, each time the manual QA team receives a build for regression, they also get the test suite marked by a script, with an affection level assigned to each case.
Test suite for build_02345b

| TC Number | TC Name | Rank | Affection level |
| --- | --- | --- | --- |
| TC2678879 | User is able to change privacy to "Only me" | 7 | Strong |
| TC2678880 | Application reflects the privacy changes | 7 | Strong |
| TC2678881 | User is able to track workout | 9 | Not affected |
| TC2678882 | User is able to share workout in FB | 8 | Medium |
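One way such a marking script can work is to keep a traceability map from each test case to the code modules it exercises, then score each case by what share of its modules the build's diff touched. The module names and thresholds below are assumptions for illustration, not the actual implementation:

```python
# Hypothetical traceability map: each black-box test case lists the
# code modules it exercises; a build's diff gives the changed modules.
TRACEABILITY = {
    "TC2678879": {"privacy", "profile"},
    "TC2678880": {"privacy", "ui_core"},
    "TC2678881": {"workout_tracker"},
    "TC2678882": {"sharing", "ui_core", "api"},
}

def mark_affection(changed_modules):
    """Assign an affection level per test case: 'Strong' if at least half
    of its modules changed, 'Medium' if some did, 'Not affected' otherwise."""
    result = {}
    for tc, modules in TRACEABILITY.items():
        hit = len(modules & changed_modules) / len(modules)
        if hit >= 0.5:
            result[tc] = "Strong"
        elif hit > 0:
            result[tc] = "Medium"
        else:
            result[tc] = "Not affected"
    return result

print(mark_affection({"privacy", "sharing"}))
```

With a diff touching the hypothetical `privacy` and `sharing` modules, this reproduces the levels in the table above: the two privacy cases come out Strong, workout tracking Not affected, and FB sharing Medium.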
6. Implement CI
We learned the lesson: the more your team is freed from repeatable tests, the deeper the testing will be. The most traditional way to cope with the continuous growth of the regression testing workload is automation. The ideal flow is for the manual team to start its testing after the automated run has finished. For one of our projects we use nightly regression runs with 1200+ cases executed over a cloud of 30 devices, and passed scenarios are excluded from the following manual runs. In terms of Continuous Integration, every build is tested with a sanity check, and the large test run is scheduled as a nightly job.
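The "passed scenarios are excluded" step is a plain set subtraction. A minimal sketch, assuming the nightly run exports a case-to-status map (the case IDs here are invented):

```python
def manual_backlog(suite, nightly_results):
    """Drop cases the nightly automated run already passed; keep failed,
    skipped, and not-automated cases for the manual team, in suite order."""
    passed = {tc for tc, status in nightly_results.items() if status == "passed"}
    return [tc for tc in suite if tc not in passed]

suite = ["TC1", "TC2", "TC3", "TC4"]
nightly = {"TC1": "passed", "TC2": "failed", "TC3": "passed"}  # TC4 has no automation
print(manual_backlog(suite, nightly))  # → ['TC2', 'TC4']
```

Note that a case with no automated coverage at all stays on the manual list by default, which is the safe direction to err in.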
7. Use farm of devices to benefit from multithreading
Imagine we run regression tests in a short release cycle, receiving builds 5 times per day. Having your automated cases connected to a farm of real devices (we avoid using emulators for testing) is very beneficial for regression testing. Here's a demonstration of the relative efficiency of automated tests once the proper infrastructure is in place. We have two modes of test running: single-threaded and multithreaded.
- We use the multithreaded mode when we need to cover a lot of test cases as fast as possible.
- We use the single-threaded mode when we need to cover a lot of devices as deeply as possible.
Example: Test results achieved in 1 minute
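The two modes map naturally onto a thread pool over the device farm. A minimal sketch with Python's `concurrent.futures`; the `run` function stands in for a real device-farm call and everything about it is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def run(test, device):
    # Placeholder for a real device-farm invocation (e.g. adb or a cloud API).
    return f"{test}@{device}: ok"

def run_multithreaded(tests, devices):
    """Breadth mode: spread the tests across devices in parallel,
    to finish the whole suite as fast as possible."""
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        futures = [pool.submit(run, t, devices[i % len(devices)])
                   for i, t in enumerate(tests)]
        return [f.result() for f in futures]

def run_singlethreaded(tests, devices):
    """Depth mode: run every test on every device, device by device."""
    return [run(t, d) for d in devices for t in tests]

tests = ["login", "track_workout"]
devices = ["Galaxy S8", "Nexus 5"]
print(len(run_multithreaded(tests, devices)))   # → 2 results, suite covered once
print(len(run_singlethreaded(tests, devices)))  # → 4 results, suite covered per device
```

Breadth mode runs each test once, somewhere; depth mode runs each test everywhere, so its cost grows with the size of the device pool.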
Instead of Conclusion
The methods listed above often allow us to provide cost-effective solutions to our clients. The average savings rate, after all implementation costs, is about 20-30% depending on the project. That's a lot when you are on a tight budget, and definitely worth investing in.
Work less, do more.