When it comes to software testing, a single approach may not always work for every team. Delivering quality software, and doing so efficiently, sometimes calls for a two-pronged approach.
While automation testing may be the standard, there are scenarios where manual testing is preferable, or even the only option. Creating a holistic QA strategy that strikes a fine balance between manual and automation testing thus becomes indispensable.
However, with teams working remotely, how can organizations help them make the shift to remote testing seamlessly? With non-negotiable requirements such as uninterrupted access to test infrastructure and effective collaboration, is this even achievable?
In this webinar, David Burns, core contributor to Selenium and co-editor of the W3C WebDriver specification, answers these questions and shares best practices for remote testing teams working cross-functionally.
Here’s a round-up of some of the questions that he answered during the webinar:
How do we move towards Shift Left testing in a small company where only one tester is available?
I believe that the smaller the group, the easier it is to start with Shift Left testing. It is easier to get testers involved in the design phase when the group is smaller. They don’t need to pair program with everyone, but they must make sure that quality is baked into every process.
How will you conduct Risk Analysis?
The way I tend to start is by identifying the catastrophic failures that we can and cannot manage. For example, if you have an installer or auto-updater, those must never break, so they should have as much testing as possible. Work from there down the chain of features until you reach the “this looks ugly” kind of bugs that are not accessibility-related.
The highest-risk areas, and the first that need to be improved, are those that impact the way people go about their core work or that impact the company financially, whether through your brand being degraded or through having to give customers their money back.
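This triage can be sketched as a simple scoring exercise. The feature names and 1–5 scores below are hypothetical, not from the webinar; the point is only that impact times likelihood gives you an ordering to spend testing effort against.

```python
# Hypothetical risk-scoring sketch: rank features by impact x likelihood,
# then spend testing effort from the top of the list down.
features = [
    # (name, impact 1-5, likelihood of failure 1-5) -- illustrative values
    ("auto-updater", 5, 3),
    ("checkout", 5, 2),
    ("report export", 3, 3),
    ("cosmetic layout", 1, 4),
]

# Highest risk score first: these are the areas that must never break.
ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, impact, likelihood in ranked:
    print(f"{name}: risk score = {impact * likelihood}")
```

The exact scoring scheme matters less than revisiting it regularly; a feature's likelihood of failure changes as the code around it churns.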
Given a limited QA resource, how would you balance doing Manual and Automation work?
I think this is the perfect case for weighing up strategic vs. tactical wins. If you need short-term wins, then manual testing is probably the answer. If you need your tests to survive a long time and the product will continue to be iterated on, then make sure you are adding automation. This lets your exploratory tester know what is covered, so they can look elsewhere.
Do you automate edge cases or only the happy flows?
I always look at the workflows and then make sure that both positive and negative tests cover them. If I see opportunities to catch edge cases then I will, but edge cases are exactly that and tend to be missed in automation. In my experience, however, exploratory testers are really good at finding those issues.
I'm a manual tester learning Selenium (Java). My team has been slowly adopting automated testing into our agile framework, but it's been difficult to get the product planning team to see it as a value-add. Do you have any suggestions for me to better communicate the importance of making automation part of our planning process?
I think a team is failing at agile if it sees testing as a bolt-on that happens at a certain point.
Automation is a task that pays for itself the more it is run. The more you have to argue to get it into a process and fight to get people to maintain it, the less chance it has of succeeding. Automation is code and needs to be treated at the same level as the code your customers use. Next time a bug comes in from a customer and there is no automation for that area, point it out. If an area regresses, point that out. Developers, like everyone else, are fallible, and automation helps them catch those mistakes.
If this doesn’t happen, the team is never going to be efficient.
Should we start with automation if the product is still in the development phase and changing on a daily basis?
Unless your product’s user flows are radically changing daily, you should start testing straight away. If your automation is constantly breaking and you are only noticing it in CI, there is likely a fundamental communication problem between the QAs and the developers on the team.
One reason could be that developers don’t believe they should own the test code, which in my opinion is wrong. Another may be that the application is not being built with automation in mind. If merely moving an element to a different location on the page breaks a test, then you have chosen poor selectors for finding that element.
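To illustrate the selector point without a real browser, here is a minimal sketch in pure standard-library Python, standing in for Selenium locators. The markup and the `data-testid` attribute convention are assumptions for the example; the point is that a selector tied to a stable attribute survives a layout change, while one tied to an element's position in the page would not.

```python
# Minimal sketch (no real Selenium): an attribute-based lookup keeps
# finding the element even after the surrounding layout changes.
from html.parser import HTMLParser

class TestIdFinder(HTMLParser):
    """Collects tags that carry a data-testid attribute."""
    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-testid" in attrs:
            self.found[attrs["data-testid"]] = tag

BEFORE = '<form><div><button data-testid="submit">Go</button></div></form>'
# After a redesign the button moved inside the page, but data-testid is unchanged:
AFTER = '<form><section><p>New copy</p><button data-testid="submit">Go</button></section></form>'

for html in (BEFORE, AFTER):
    finder = TestIdFinder()
    finder.feed(html)
    assert finder.found["submit"] == "button"  # found in both layouts
```

In Selenium terms, this is the difference between an absolute XPath such as `/html/body/div[2]/form/button`, which breaks as soon as the layout shifts, and a CSS locator such as `[data-testid='submit']`, which only breaks if the element genuinely goes away.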
How can we start automation so that we can get maximum results?
Automation pays for itself the longer it runs. If you can add some unit tests from the start, you are on your way to automating your testing. Setting up a CI/CD pipeline so that you are building and testing as often as possible, and bringing your configuration management into that pipeline, also improves your test automation.
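As a concrete starting point, even a single unit test is enough to seed a pipeline. The sketch below is hypothetical (`apply_discount` is an invented function under test, written pytest-style); the idea is simply that the first test establishes the habit and the pipeline.

```python
# Hypothetical sketch: one small function and two pytest-style unit tests,
# enough to seed a CI pipeline that runs on every commit.
def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent, rounded to cents."""
    return round(price * (1 - pct / 100), 2)

def test_ten_percent_off():
    assert apply_discount(200.0, 10) == 180.0

def test_zero_discount():
    assert apply_discount(99.99, 0) == 99.99
```

Run this with `pytest` locally, then wire the same command into the CI/CD pipeline so it executes on every push; everything added later inherits that loop for free.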