6 Things to avoid when writing Selenium Test Scripts
Shreya Bose, Technical Content Writer at BrowserStack - April 21, 2020
Since Selenium is the most widely used test automation framework, it is safe to assume that most people running automated tests are using it. The basis of every Selenium test is the test script.
Without a well-written test script, Selenium tests are bound to be flaky and ineffective, if not outright impossible to run. A badly written test script creates more work for developers and testers instead of making their lives easier.
A previous article has already discussed how to write good test cases. This one will discuss a few things to avoid when creating Selenium test scripts.
- Incorrect use of Waits and Sleeps: Implicit and Explicit Waits are commonly used in automated Selenium testing to wait, up to a set timeout, for a condition (such as an element appearing) before executing a command. Used correctly, and kept separate, they make it possible to test dynamic content, locate web elements to interact with, and exercise all functionality. However, mixing Implicit and Explicit Waits leads to unpredictable wait times or timeouts, which makes tests unstable. Similarly, excessive use of thread Sleeps slows suites down and causes failures; sleeps should be used sparingly, if at all.
It is best to acquire an in-depth understanding of Selenium Wait commands before creating any Selenium scripts.
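The core idea behind an explicit wait can be sketched in plain Python as a polling loop. This is a simplified sketch of what Selenium's WebDriverWait does under the hood, not Selenium's actual implementation; `condition` stands in for an expected condition such as "the element is present":

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors the idea behind an explicit wait: it re-checks one
    specific condition repeatedly, instead of pausing the whole script
    for a fixed duration the way time.sleep() does.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

In real Selenium code the equivalent is `WebDriverWait(driver, 10).until(...)` with a condition from `expected_conditions`; the sketch above simply shows why such a wait tolerates slow-loading pages while a fixed sleep does not.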
- Large test cases with multiple chained assertions: Truly useful Selenium scripts are reusable and easy to maintain. That is hard to achieve with large test cases that cover big swathes of the application under test in one go. The larger a test, the harder it is to pinpoint a bug; scanning through thousands of lines of test code to find what went wrong will always pose a massive challenge. This is addressed by implementing thorough unit testing and by relying on the Page Object pattern to reduce code duplication.
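A minimal Page Object sketch looks like the following. The page name, locators, and element IDs are illustrative; `driver` is anything exposing Selenium's `find_element(by, value)` interface, so the class works with a real WebDriver or with a stub:

```python
class LoginPage:
    """Page Object: one class owns the locators and actions for one page.

    Tests call log_in() instead of repeating locator lookups, so when
    the page's markup changes, only this class needs updating.
    """
    # Locator tuples defined once, in one place (IDs are hypothetical).
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "submit")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

Because the locators live in one class, a dozen tests can reuse `LoginPage(driver).log_in(...)` without duplicating a single selector.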
- Automating the wrong tests: Nothing is worse for a Selenium test script than being written for the wrong tests. If automated tests are created for areas that require manual testing, testers have to keep fixing automation code instead of looking for actual bugs in the software. Automating the testing of brand-new features or an unstable UI is a significant waste of time and effort. Scripts must be written for the right tests. Selenium test scripts are best created and used to automate user functions or processes that are repetitive. For example, if a form needs to be tested by entering 500 different values, that scenario is perfect for automation. Similarly, every time new code is added to the codebase, regression tests must be run to ensure that the change does not break the application's existing features; again, a test that calls for automation.
Any Selenium script created to automate everything will fail. Perform risk analysis of different elements of the software, conduct research on where automation will yield most results, and then start creating scripts accordingly.
- Insufficient Test Reporting: Without sufficient reporting and documentation, Selenium tests are bound to fail. Imagine that a bug has been identified. Without documentation, the tester cannot trace the source of the bug, or which developer wrote the code in which it manifested. Consequently, nobody knows who should examine the bug, or even what the code is meant to do. It is imperative to name and label tests so that they are easy for the team to manage. Additionally, mechanisms must be put in place to facilitate easy communication among team members, so that any anomalies can be immediately reported to the relevant personnel. Sharing screenshots of the bug is also a great practice. Incorporating these practices streamlines the test suite, allowing it to yield optimal results.
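The screenshot-and-label habit can be wired into a small failure hook. This is one possible sketch, not a standard Selenium feature: the helper name and log format are invented for illustration, though `save_screenshot` is the real method name on a Selenium WebDriver:

```python
import datetime

def report_failure(driver, test_name, error, log):
    """On a test failure, capture a screenshot and a labelled log entry,
    so the team can see which test broke, when, and what the page
    looked like at that moment.
    """
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    shot = "%s-%s.png" % (test_name, stamp)
    driver.save_screenshot(shot)          # real WebDriver method
    log.append({"test": test_name, "error": str(error), "screenshot": shot})
    return shot
```

Because the entry carries a descriptive test name and a timestamped screenshot, whoever triages the report can tell at a glance what failed and what the user would have seen.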
- Bad Validation Practices: Testers usually build validation into test scripts to ascertain whether a functionality (a login page, for example) behaves as expected. Without validating these elements, the test script loses its purpose. The same applies when validation stops at visual elements, checking only that the surface-level UI works. Imagine a user who tries to place an order and cannot, because the test only checked the confirmation message and never queried the database to confirm the order was actually recorded.
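The order example suggests validating on two levels: the UI message and the backend state. A hedged sketch, where the function name is invented and `orders_db` stands in for whatever database lookup the test performs after the UI flow completes:

```python
def verify_order_placed(ui_message, order_id, orders_db):
    """Validate beyond the surface: a success banner alone proves
    nothing if no order actually reached the backend, so assert on
    both the UI text and the stored record.
    """
    assert "success" in ui_message.lower(), "UI did not confirm the order"
    assert order_id in orders_db, "order missing from the database"
```

If the banner appears but the record is absent, the second assertion fails and the script catches exactly the bug a surface-level check would have missed.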
- Testing on a single browser: This might seem obvious, but it still needs to be said: cross-browser testing is absolutely imperative to the success of any website or app. Do the research, scour market analytics, and create a list of browser-device-OS combinations that the target audience is likely to access the software from. Selenium makes cross-browser testing easy; use parallel testing to cover multiple browsers and speed up results. With ever-increasing competition in the digital marketplace, users will not hesitate to uninstall an app or leave a website (and never return) at the first sign of a bug.
In case testers do not have an in-house device lab, they can resort to cloud-based testing solutions that offer real browsers to test on. BrowserStack provides a cloud Selenium grid of 2000+ real browsers and devices to test on. It allows the creation and execution of automated Selenium testing in real user conditions. This enables testers to observe, monitor and verify software behavior in the exact environment that potential users or customers will operate it in.
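The parallel-testing idea can be sketched with Python's standard thread pool. The browser names are illustrative, and `test` stands in for a routine that would, in real use, create a Remote WebDriver session for that browser (for example against a Selenium grid) and run the scenario:

```python
from concurrent.futures import ThreadPoolExecutor

def run_cross_browser(test, browsers):
    """Run one test routine against several browser configurations in
    parallel, returning a result per browser.

    Each worker handles one configuration, so total wall-clock time is
    roughly that of the slowest browser rather than the sum of all runs.
    """
    with ThreadPoolExecutor(max_workers=len(browsers)) as pool:
        results = dict(zip(browsers, pool.map(test, browsers)))
    return results
```

Frameworks such as pytest-xdist or TestNG provide the same parallelism with far more tooling; the sketch only shows why running configurations concurrently shortens the feedback loop.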
To create result-driven, sophisticated, and productive Selenium scripts, one must understand how Selenium commands actually behave. Combine the best practices above with a cautious eye on the most common reasons for script failure. This enables the creation of test scripts that do exactly what they are meant to, with minimal effort from the developer.