5 Core Practices for Efficient Visual Testing
Shreya Bose, Technical Content Writer at BrowserStack - September 17, 2020
What is Visual Testing?
Visual Testing, sometimes called visual UI testing, verifies that the software user interface (UI) appears correctly to all users. Essentially, visual tests check that each element on a web page appears in the right shape, size, and position. They also check that these elements appear and function correctly across a variety of devices and browsers. In other words, visual testing factors in how multiple environments, screen sizes, operating systems, and other variables affect the software's appearance.
Visual tests generate, analyze, and compare browser snapshots to detect if any pixels have changed. These pixel differences are called visual diffs (sometimes called perceptual diffs, pdiffs, CSS diffs, UI diffs).
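The pixel comparison behind a visual diff can be sketched in a few lines. The snippet below is a minimal illustration, not any tool's actual algorithm: images are represented as 2D lists of (R, G, B) tuples, whereas real tools operate on decoded screenshot data, but the principle is the same.

```python
# Minimal sketch of pixel-by-pixel comparison between a baseline snapshot
# and a new snapshot. Any pixel whose channels differ beyond a threshold
# is reported as part of the visual diff.

def visual_diff(baseline, snapshot, threshold=0):
    """Return the (x, y) coordinates of every pixel that differs."""
    diffs = []
    for y, (base_row, snap_row) in enumerate(zip(baseline, snapshot)):
        for x, (base_px, snap_px) in enumerate(zip(base_row, snap_row)):
            # Sum of per-channel differences; 0 means identical pixels.
            delta = sum(abs(b - s) for b, s in zip(base_px, snap_px))
            if delta > threshold:
                diffs.append((x, y))
    return diffs

baseline = [[(255, 255, 255)] * 3 for _ in range(2)]   # 3x2 all-white image
snapshot = [[(255, 255, 255)] * 3,
            [(255, 255, 255), (200, 0, 0), (255, 255, 255)]]  # one changed pixel
print(visual_diff(baseline, snapshot))  # [(1, 1)]
```

A non-zero `threshold` is one common way tools suppress insignificant rendering noise (for example, anti-aliasing differences) so that only meaningful changes surface as diffs.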
Benefits of Visual Testing
- By verifying that a website or app has a visually perfect UI, visual testing helps create a solid first impression on real-world users.
- If the UI is distorted in any manner, it adversely affects the software's functionality. For example, say the UI has a Submit button that must be clicked to enter relevant data. Functional tests show that the button works perfectly, but when the application is accessed on certain mobile devices, the button is covered by a text box. In this case, even though the function works correctly, the result is a sub-par user experience that only adequate visual testing would have caught.
- Visual testing verifies that software appears as it is supposed to, across different browsers and device screens. Think of it as the optical counterpart to cross browser testing.
To learn more, refer to Visual Testing: A Beginner’s Guide.
Core Practices for Efficient Visual Testing
- Perform system tests first: Don't run visual tests before ensuring that every feature works exactly as intended. Invest maximum effort at the unit test level so that later-stage tests (which usually cover larger sections of the software) do not return significant issues, which inevitably take more effort to resolve. When in doubt about the order in which tests should be structured, refer to the testing pyramid. Tests should go in the following order: Unit Tests > Integration Tests > UI Tests
- Create small specs: Smaller specs are helpful because when an issue does emerge, it is much easier to pinpoint. Larger, more detailed specs not only surface more errors (because they cover larger sections of the software) but also make debugging harder, since more code must be investigated. It is best to limit each spec to the layout details of a single web element. Don't create one spec for an entire website page: each webpage comprises multiple web elements, and a page-level spec would require an enormous amount of detail. Instead, craft small specs that accurately test each element and, when put together, fully define the webpage.
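One way to picture a "small spec" is as a tiny record of a single element's expected layout, plus a checker that reports only that element's failures. The sketch below is hypothetical; the element name, fields, and values are illustrative, not taken from any real tool or page.

```python
# Hypothetical sketch: one spec per web element, holding only that
# element's expected layout values.

def check_spec(spec, measured):
    """Compare measured layout values against a single element's spec."""
    failures = []
    for prop, expected in spec["expect"].items():
        actual = measured.get(prop)
        if actual != expected:
            failures.append(f"{spec['element']}: {prop} is {actual}, expected {expected}")
    return failures

# A small spec covers exactly one element, so a failure points straight at it.
logo_spec = {"element": "#logo", "expect": {"width": 120, "height": 40}}

print(check_spec(logo_spec, {"width": 120, "height": 40}))  # []
print(check_spec(logo_spec, {"width": 120, "height": 38}))
# ['#logo: height is 38, expected 40']
```

Because each spec names one element, a failing check immediately identifies what broke, which is exactly the debugging advantage the practice above describes.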
- Use dedicated specs: Websites and apps contain a great many elements. To run visual tests that take each of them into account, testers should use structured, dedicated specs that ensure no visual element is missed. Try using the following blueprint for visual tests:
Header > Main Section > Scroll Section 1 > Scroll Section 2 > Footer
Create a full spec for the page by using the above as the main sections. Then, start running tests for each section.
Take the example of the header: break it down into its own sub-elements and test each one in turn.
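As a hypothetical sketch, a dedicated header spec might enumerate the section's sub-elements together with their expected geometry. The element names and values below are purely illustrative:

```python
# Hypothetical header spec: one section from the page blueprint, broken
# into sub-elements with expected positions and sizes.

header_spec = {
    "section": "header",
    "elements": {
        "#logo":   {"x": 10,  "y": 10, "width": 120, "height": 40},
        "#nav":    {"x": 150, "y": 10, "width": 400, "height": 40},
        "#search": {"x": 600, "y": 10, "width": 180, "height": 40},
    },
}

def elements_in(spec):
    """List the elements a section's visual test must cover."""
    return sorted(spec["elements"])

print(elements_in(header_spec))  # ['#logo', '#nav', '#search']
```

Writing one such spec per section of the blueprint (Header, Main Section, Scroll Sections, Footer) gives a checklist that guarantees every visual element is accounted for.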
Automated visual tests should be programmed to test each element and gauge whether they align with the baseline requirements in terms of pixels.
- Use relevant logs: It is easy to assume that visual UI bugs can be identified simply by looking at images of the bug-ridden interface and comparing them to baseline images. This isn't always possible. Sometimes the discrepancy is so minuscule that it registers as a pixel difference but cannot be seen with the human eye. In such cases, the tester needs more data to find the cause of the discrepancy.
Logs related to the software test help provide the data testers need. Visual logs with timestamps can be helpful with this. But what’s necessary is some kind of key identifier that can be linked to the visual error. Otherwise, testers have to comb through all the code and images to figure out the issue.
Consider using a tool that would help with logging. For instance, Percy by BrowserStack grabs screenshots, identifies visual changes, and informs the testing team of all changes.
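The "key identifier" idea above can be sketched as a structured log entry attached to every detected diff. This is a hypothetical illustration, not Percy's logging format; the field names and the snapshot/element names are invented for the example.

```python
import time

# Hypothetical sketch: record a timestamp and a stable key (snapshot name
# plus element) for every visual difference, so a failure can be traced
# without combing through all the code and screenshots.

def log_visual_diff(log, snapshot_name, element, diff_pixels):
    """Append one structured log entry per detected visual difference."""
    log.append({
        "key": f"{snapshot_name}:{element}",          # identifier linkable to the error
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "diff_pixels": diff_pixels,                    # how many pixels changed
    })

log = []
log_visual_diff(log, "homepage", "#submit-button", 42)
print(log[0]["key"])  # homepage:#submit-button
```

With a key like `homepage:#submit-button` in every entry, a diff that is invisible to the eye still points directly at the snapshot and element that produced it.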
- Start with the basics: When verifying a web element, start with the following questions:
- Is the element of the right size?
- Is the element placed within its parent element, if it is supposed to be?
- Is the element inside another element, if it is supposed to be?
- Is the element located on the top, bottom, right, or left of another element?
- Are all elements aligned accurately relative to each other and in the broader context of the webpage structure?
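The basic checks listed above reduce to simple geometry. The sketch below models each element as an (x, y, width, height) box; in a real test these values would be read from the rendered DOM, so the boxes here are illustrative assumptions.

```python
# Sketch of the basic geometric checks: size, containment, and relative
# position, over (x, y, width, height) boxes.

def right_size(el, width, height):
    """Is the element the expected size?"""
    return el[2] == width and el[3] == height

def inside(inner, outer):
    """Is `inner` fully contained within `outer` (e.g. its parent element)?"""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[0] + inner[2] <= outer[0] + outer[2]
            and inner[1] + inner[3] <= outer[1] + outer[3])

def left_of(a, b):
    """Is element `a` located entirely to the left of element `b`?"""
    return a[0] + a[2] <= b[0]

header = (0, 0, 800, 100)    # hypothetical parent section
logo   = (10, 10, 120, 40)
search = (600, 10, 180, 40)

print(right_size(logo, 120, 40))   # True
print(inside(logo, header))        # True
print(left_of(logo, search))       # True
```

Analogous `above`/`below`/`right_of` predicates cover the remaining positional questions, and chaining these checks across elements verifies alignment in the broader page structure.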
Obviously, like most forms of testing, visual testing should ideally be automated. This requires the right tool: one that manages the test process and generates reports for testers to review and approve or reject.
Percy by BrowserStack is one of the best-known tools for automating visual testing. It captures screenshots, compares them against baseline images, and highlights visual changes. With increased visual coverage, teams can deploy code changes with confidence on every commit.
With Percy, testers can increase visual coverage across the entire UI and eliminate the risk of shipping visual bugs. They can avoid false positives and get quick, deterministic results with reliable regression testing. They can release software faster with DOM snapshotting and advanced parallelization capabilities designed to execute complex test suites at scale.
Percy’s SDKs are easy to install and make it simple to add snapshots. The visual testing tool also integrates with CI/CD pipelines and source code management systems to add visual tests to each code commit. As mentioned before, once tests are initiated, Percy takes screenshots and identifies visual changes. Testers can then review the visual diffs and decide whether the changes are legitimate or require fixing.
By incorporating the core practices detailed above, visual testing can be streamlined for greater speed, accuracy, and efficiency. Couple them with the right tools to deliver visually perfect applications quickly and consistently.